| title | keywords | instruction |
|---|---|---|
Unpacking the combined effects of job scope and supervisor support on in-role performance
|
[
"Supervisor support",
"In-role performance",
"Job characteristics theory",
"Job scope",
"Social support theory"
] |
Summarize the following paper into a structured abstract.
Introduction: Organizations across sectors pay insufficient attention to the importance of social context in highly challenging work environments. The social context in highly challenging work settings plays a vital role in shaping employees' proficiencies and behaviors. Social context at work captures the interpersonal interactions that develop as employees perform their jobs, roles and tasks (Chou, 2015; Grant and Parker, 2009). The relational perspective of work design focuses on how the social aspects of a job, combined with job roles and/or tasks, advance an employee's understanding of the job and how this understanding can impact organizational outcomes (Grant and Parker, 2009). In this regard, social exchange theory is a useful approach for integrating relational or social work-context variables into the job characteristics model (JCM). The JCM posits that five core job characteristics make jobs intrinsically motivating and satisfying and encourage the achievement of high performance (Hackman and Oldham, 1980). Despite the popularity and continued use of the JCM in the job enrichment literature, the model has received multiple criticisms from other scholars (Grant et al., 2010).
Theoretical background and hypotheses: The JCM framework of Hackman and Oldham (1975) includes the five job characteristics that influence work attitudes and behaviors: skill variety, task identity, task significance, autonomy and feedback. They also suggested that growth need strength (GNS), a trait associated with internal energy and achievement orientation, acts as a moderator in the job characteristics/outcomes relationship. Although the JCM continues to enjoy strong support among scholars for the proposed direct relationship between core job characteristics and outcomes, the evidence confirming the moderating effects of GNS or other related variables such as personality is mixed (e.g. Xie and Johns, 1995). We first briefly review the research on the relationship between job characteristics and job performance, and then discuss the proposed moderating effects of supervisor support in shaping this relationship.
Method: Data collection and sample
Discussion: Overall, both hypotheses were confirmed. More specifically, all predictions concerning the direct relationship of job scope with in-role performance, and the interaction effect of job scope and supervisor support, were confirmed.
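For readers who want these confirmed effects in estimable form, they correspond to a standard moderated-regression specification; this is a generic sketch in our own notation, not the paper's reported estimation equation:

$$\mathrm{IRP}_i = b_0 + b_1\,\mathrm{JS}_i + b_2\,\mathrm{SS}_i + b_3\,(\mathrm{JS}_i \times \mathrm{SS}_i) + \varepsilon_i$$

where IRP denotes in-role performance, JS job scope and SS supervisor support for employee i; the confirmed direct and interaction hypotheses correspond to significant estimates of b1 and b3, respectively.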
Conclusion: This study proposed that the relational context of jobs, particularly social support from supervisors, can motivate employees to perform their duties more productively. It also highlights how the structure of an employee's work, together with manifest social support, plays a critical role in shaping the employee's relationship with their supervisor. The study shows that both job design and in-role performance are strongly linked to relational contexts that can motivate employees to achieve their stated targets. Thus, this research improves our understanding of how social context at the workplace can make a difference for employees and their organizations.
|
Family firms, board structure and firm performance: evidence from top Indian firms
|
[
"India",
"Family firms",
"Corporate governance",
"Firm performance"
] |
Summarize the following paper into a structured abstract.
1. Introduction: Family businesses are the most dominant among publicly traded firms across the world (Shleifer and Vishny, 1986; Burkart et al., 2003; Anderson and Reeb, 2003; La Porta et al., 1999). In Continental Europe, about 44 per cent of publicly held firms are family-controlled (Faccio and Lang, 2002). In the USA, equity ownership concentration is modest; among the Fortune 500 firms, around one-third are family firms (Anderson and Reeb, 2003). The concentration of ownership has been found to be higher in other developed nations (Faccio and Lang, 2002; Franks and Mayer, 2001; Gorton and Schmid, 1996). Family businesses dominate many developing economies, with about two-thirds of the firms in Asian countries owned by families or individuals (Claessens et al., 2000). In India, around 60 per cent of the listed top 500 firms are family firms (Chakrabarti et al., 2008). These family firms hold large equity stakes and more often than not have family representation on the board of directors. Family equity stakes in Indian firms may be divided across individual holdings by promoters and their family members, privately held firms and cross-holdings from other listed group businesses. The control and influence exerted by families may lead to performance differences with non-family firms (Anderson and Reeb, 2003).
2. Potential benefits of family firms: Controlling blockholders such as families can have potential benefits and competitive advantages. The extant literature on family firms focuses mostly on the agency problem. The problem in widely held firms includes the limited ability to select reliable agents, monitor the selected agents and ensure performance (Fama and Jensen, 1983). Effective monitoring can improve firm performance and reduce agency costs (Fama and Jensen, 1983; Jensen and Meckling, 1976). The principal-agent conflict associated with widely held firms may be reduced by concentrated blockholders, specifically family blockholders, as they have a significant economic incentive to monitor the management (Demsetz and Lehn, 1985). Specifically, given their substantial intergenerational ownership stake and the fact that a majority of their wealth is invested in a single business, family firms have a strong incentive to monitor the management. The lengthy involvement of family members in the business, in some cases spanning generations, permits them to move further along the learning curve. This superior knowledge allows family members to monitor managers better (Anderson and Reeb, 2003). Also, the long-term presence of family members in their firm leads to longer investment horizons than those of other shareholders and may provide an incentive to invest according to market rules (James, 1999). This willingness of family firms to invest in long-term projects leads to greater investment efficiency (Anderson and Reeb, 2003; James, 1999; Stein, 1988). Another advantage of the long-term presence of families is that external suppliers, dealers, lenders and the like are more likely to have favorable dealings with the same governing bodies, owing to long-term dealings and reputation, than with non-family firms. This sustained presence gives the family a strong reason to maintain its reputation (Anderson and Reeb, 2003).
3. Potential costs of family firms: Family firms are also said to be fraught with nepotism, family disputes, capital restrictions, exploitation of minority shareholders and executive entrenchment, all of which adversely affect firm performance (Allen and Panian, 1982; Chandler, 1990; Faccio et al., 2001; Gomez-Mejia et al., 2001; Perez-Gonzalez, 2006; Schulze et al., 2001, 2003). Expropriation of minority shareholders by large shareholders may generate an additional agency problem (Faccio et al., 2001). Concentrated shareholders, by virtue of their controlling position in the firm, may extract private benefits at the expense of minority shareholders (Burkart et al., 2003). Capital expenditure can be affected by families' preference for special dividends (DeAngelo and DeAngelo, 2000). Employee productivity can also be adversely affected by family shareholders acting on their own behalf (Burkart et al., 1997). Family firms have been found to show biases toward business ventures of other family members, resulting in suboptimal investment (Singell, 1997). Family shareholders tend to forgo profit-maximizing activities owing to financial preferences that often conflict with those of minority shareholders (Anderson and Reeb, 2003).
4. Research design: data, variables and analysis: 4.1 Sample
5. Findings and discussions: 5.1 Summary statistics
6. Conclusion: In this study, we analyze the interaction effect of family firms and board governance factors on firm performance in a sample of top publicly traded Indian firms. We contribute to the growing literature on family firms by providing a multi-year analysis of the influence of board structure on firm performance in family versus non-family firms in the Indian context. We endeavor to expand our understanding of corporate governance and to shed light on the impact of the proportion of shareholding, family representative directors and having a professional CEO in large family firms in India. The result of the panel data analysis shows that the interaction variable (family firm × board score) has a statistically significant negative association with firm performance measured by both Tobin's q and ROE. This is consistent with the results of Garcia-Ramos and Garcia-Olalla (2011) in the European context. Our result suggests that the incremental effect of the board index score for family firms, relative to non-family firms, is negative for firm performance.
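As a hedged sketch, the panel specification implied by this description can be written as follows; the variable names and the control vector X are our assumptions, not the paper's exact model:

$$\mathrm{Perf}_{it} = \alpha_i + \beta_1\,\mathrm{Fam}_i + \beta_2\,\mathrm{Board}_{it} + \beta_3\,(\mathrm{Fam}_i \times \mathrm{Board}_{it}) + \gamma^{\top} X_{it} + \varepsilon_{it}$$

where Perf_it is Tobin's q or ROE for firm i in year t, Fam_i indicates a family firm and Board_it is the board index score; the reported finding is a significantly negative β3.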
|
Sustainable organisational learning - a lite tool for implementing learning in enterprises
|
[
"Evaluation",
"Organizational learning",
"Competence development",
"Learning technology",
"Learning transfer"
] |
Summarize the following paper into a structured abstract.
Introduction: Unstable and dynamic societal and business environments make it more imperative than ever that organisations are able to operate their learning processes so as to optimise competence development, performance and innovation (Brinkerhoff, 2006). Research-based knowledge on the subject has taught us that, for enterprises to optimise the implementation of learning in practice, they need to attend to elements related to the learner, the learning situation and the organisational setting in which new knowledge and/or competences are to be integrated and used.
Understandings of learning transfer: Today's learning research focuses on how to understand, design and make learning activities, formal as well as informal, an integral part of the existing organisational HRM and management system and practice. A classical theoretical understanding of learning in organisations and workplaces takes as its point of departure an idea of learning as a matter of designing a learning and practice situation characterised by mutually shared or identical elements. This theory of transfer and sharing of knowledge, originally behavioural and later developed within a cognitive frame, typically settled into the common elements theory of shared traits between the learning and practice environments, or of symbolic representations. Later, Baldwin and Ford (1988) advanced a model that has had an immense influence on theoretical and empirical research within adult learning and human resource studies. The distinction between three broad dimensions, the learner's characteristics, the design of the learning situation and the work environment settings, has guided research- and practice-oriented contributions within the organisational and workplace learning field for decades.
A lite tool for creating sustainable organisational learning: The tool studied and described in this paper (named SuperInsight) is an evaluation- and development-oriented organisational learning technology devised with the aim of strengthening the sustainable integration and anchoring of new knowledge and/or competences in practice. The evaluation tool is deliberately designed as a lite version of an evaluation tool in order to avoid the bulky and highly bureaucratized (red tape) procedures we often experience when it comes to measuring and integrating learning activities in a practice setting. The purpose of the lite tool is twofold. First, it aims to reinforce enterprises' ability to keep track, on a running basis, of the impact of learning activities in practice based on real-time data. Second, it intends to support an organisational learning process among the participating enterprises based on continuous input and feedback on the quality of the learning process, combined with unique and specific insight into where to intervene, if needed, in order to optimise the learning transfer process and its results.
Illustrations from a case study: Analysis of data from a case study in a large Danish telecommunications enterprise conducting leadership and sales training shows that participants in the red category are usually drowning in operations and do not have the opportunity to focus on implementing new knowledge or competences in a work context. Characteristic comments from participants in the red category are: "I will start to work with the new knowledge, when my department get more resources" and "We are firefighting all the time". Here, the role of the closest leader or accountable HR staff is to support the participant in prioritising their work tasks to facilitate the actual use of new knowledge or competences, if the organisation truly wants to be serious about its approach to organisational learning. The analysis shows that constraints on employing new knowledge and competences are related not to the actual course/training activity or to personal characteristics, but to the concrete work processes and tasks.
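The paper does not publish SuperInsight's internals, but the traffic-light logic it describes can be illustrated with a minimal sketch; the scale, thresholds and field names below are hypothetical, not the tool's actual implementation:

```python
from dataclasses import dataclass

# Hypothetical pulse item: a participant's self-rated progress on applying
# new knowledge/competences at work, on a 0-10 scale. The cut-offs are
# illustrative; SuperInsight's actual categorisation rules are not given
# in the paper.
RED_MAX = 3      # little or no transfer: "drowning in operations"
YELLOW_MAX = 6   # partial transfer: follow-up advised

@dataclass
class PulseResponse:
    participant_id: str
    transfer_score: int  # 0-10 self-rating from a pulse measurement

def categorise(response: PulseResponse) -> str:
    """Map a pulse response to a traffic-light category."""
    if response.transfer_score <= RED_MAX:
        return "red"       # leader/HR should help reprioritise work tasks
    if response.transfer_score <= YELLOW_MAX:
        return "yellow"
    return "green"

if __name__ == "__main__":
    print(categorise(PulseResponse("p-017", 2)))  # -> red
```

Run over a stream of pulse responses, a categorisation like this yields the real-time, per-participant view of learning transfer that the red-category illustration above relies on.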
How to strengthen sustainable organisational learning - lessons learned and recommendations for practice: In this paper, we argue for a double effect of using and deploying the described lite evaluation tool, which connects more clearly to real-time learning processes than a classical "baseline-midline-endline" effect evaluation offers. We see in our case study a positive effect at the individual level on using new knowledge and/or competences in a work context. In addition, we see a positive effect at the organisational level. Enterprises that employ process evaluation tools with pulse measures learn to develop, create, integrate and share new knowledge and competences, thus creating a foundation for organisational learning through new routines and practices. In the following section, we elaborate on lessons learned from our case study and present recommendations for practice.
Conclusion: We have asked how enterprises are to arrange their learning processes in order to optimise the integration and creation of sustainable organisational learning. Based on a learning evaluation tool that performs a processual real-time evaluation of the implementation of new knowledge and/or competences, we explored from the case study data how this type of tool influences learning processes and results in organisations.
|
The Irish wine market: a market segmentation study
|
[
"Ireland",
"Wines",
"Market segmentation",
"Brands",
"Marketing"
] |
Summarize the following paper into a structured abstract.
Introduction: The Irish wine market has experienced unprecedented growth in the last 15 to 20 years. From 1990 to 2007, total wine sales in Ireland more than quadrupled, increasing from 1.7 to 7.6 million cases. In the 13 years between 1994 and 2007, wine's share of the Irish alcohol market more than doubled, from 8 per cent to 17.9 per cent (Wine Development Board, 2007). Growth in wine consumption is forecast to continue, with growth of 15 per cent expected by 2012 (Euromonitor, 2008). As the wine drinking culture in Ireland is relatively new, the segmentation of the market and brand positioning are in their infancy. Further study of segmentation is required to improve the profitability of the industry and to develop choice and the accessibility of wine for Irish consumers. The specific purpose of the paper is to examine how the Irish wine market may be effectively segmented for improved brand positioning in Ireland. Thus, the paper aims to determine the key trends in the Irish wine market, examine the state of marketing in the wine industry, evaluate different approaches to segmenting the Irish wine market, and develop profiles of the resulting segments.
The Irish wine market: The Irish wine market has experienced remarkable growth, with the number of wine drinkers in Ireland doubling since 1990 and with over five times as much table wine being consumed in 2007 as in 1990 (WDB, 2007). The increased consumption of wine in Ireland over the last 15 years is attributed to the improved accessibility, affordability and branding of wine (Moran, 2002). To emphasise the significance of the growth in wine consumption in Ireland, the level of growth in wine buying can be compared with growth in the overall food and beverage sector. Practically all wine bought in Ireland is imported. Between 2000 and 2004, wine sales (and therefore imports) increased by 56 per cent (WDB, 2004), while overall imports of the food, beverages and other animal products category increased by only 18 per cent in the same time frame (CSO, 2006). For a marketer assessing the Irish wine market, equally important as the growth in consumption is the huge shift in the type of wine preferred by the Irish wine drinker. Specifically, there is a notable shift towards New World wines, with diminishing preference for Old World wines (WDB, 2007). New World wines refer to wines from regions outside of Europe. Prominent New World wine producing regions include South Africa, Australia, New Zealand, Chile, Argentina and California. Old World countries refer to European countries with a long history of wine production, such as France, Italy, Germany and Spain (Fielden, 1994). Up to 1990, Old World wine made up the majority of wine consumed in Ireland, accounting for 94 per cent of the market (WDB, 2007). Since 1990, there has been a steady shift in demand towards New World wine, which in 1990 accounted for 6 per cent of the market and in 2007 held a 71 per cent market share (WDB, 2007). Historically, French wines led the Irish wine market, but since 2001 Australia has held the largest market share, accounting for 26 per cent of wine consumed in Ireland (WDB, 2007). In 2006, the top ten wine brands in the Irish still light wine market accounted for nearly 25 per cent of total sales. These top ten wine brands and their respective brand shares are Jacob's Creek (3.2 per cent), Blossom Hill (2.7 per cent), Rosemount (2.6 per cent), E&J Gallo (2.5 per cent), Wolf Blass (2.5 per cent), Hardys (2.3 per cent), Concha Y Toro (2.3 per cent), Long Mountain (2.2 per cent), Santa Rita (2.1 per cent) and Carmen (1.9 per cent), all of which are from New World countries (Euromonitor, 2008). Understanding the shift in country of origin preference is important, as it represents an important shift in preference for style, taste, brand, price and other wine variables. Testing country of origin preference is, therefore, an essential element in this Irish wine market study.
Marketing of wine: The production of wine is a specialised area, and the wine industry has traditionally adopted a production-focused mindset, with the complexities of viticulture and vinification having occupied the attention of specialists in the area (Thomas and Pickering, 2003). Bruwer et al. (2002) note the agricultural basis of the wine value chain, and the industry is often criticised for employing mass marketing campaigns (Gluckman, 1990; Spawton, 1991a; Hall and Winchester, 1999; Bruwer et al., 2002). According to Thomas and Pickering (2003), the marketing of wine is in its infancy relative to the long history of wine making and wine drinking. Aggressive marketing was uncommon in the industry, with vineyard operators relying on the strength of their reputation to compete in the marketplace (Hall, 2004). In the early 1990s, an interest in branding emerged in the industry as a method of coping with changes in distribution and the growth of wine retailing. In 1991, the European Journal of Marketing dedicated an issue to the marketing of wine, where Spawton (1991b, c, d, e), through a series of articles, provides an insight into the state of wine marketing. This special issue served as an introduction to marketing for the industry, with the purpose of illustrating the advantage of, and necessity for, developing a customer mindset. There is a realisation in the industry that its future is geared to meeting the expectations of the wine consumer; this has contributed to the growing importance of wine marketing within the industry (Spawton, 1991b, p. 6). Gluckman (1990), in a frequently cited article, presents a number of challenges facing the wine marketer in branding wines. Most notably, a move towards own-brand labelling by retailers reinforces the necessity for strong wine brands. This need for improved branding of wine has prompted the undertaking of research in wine markets. In an industry which is relatively new to marketing, and in the Irish wine market, which has seen tremendous growth and transformation, there is a need for greater understanding of the market dynamics. Market research in general, and market segmentation in particular, has a potentially pivotal role to play in assisting wine marketers to position their wine brands effectively.
Role of market segmentation: Weir (1960, p. 95, as cited in Yankelovich, 1964) provides the following description of what a market is and, more importantly, what it is not: "The market" is not a single, cohesive unit; it is a seething, disparate, pullulating, antagonistic, infinitely varied sea of differing human beings - every one of them as distinct from every other one as fingerprints; every one of them living in circumstances different in countless ways from those in which every other one of them is living. This description of a market is a colourful representation of popular marketing thought on the composition of markets in the late 1950s and 1960s. Smith (1956), in what is considered a landmark article (Reynolds, 1965; Haley, 1968; Wind, 1978; Green and Krieger, 1991; Lin, 2002), introduces a marketing strategy labelled market segmentation as an approach to competing successfully in the reality of an environment of imperfect competition. The original article by Smith (1956) introduces market segmentation as a strategy, considered an alternative to product differentiation for dealing with diversity in the market. While the initial representation of the market segmentation strategy is based in economic theory, market segmentation developed into one of the foremost concepts in marketing thought (Wind, 1978; Johnson et al., 1991; Lin, 2002). At a broad level, market segmentation provides a marketer with a clearer focus on customer needs, and thereby aids decision making for improved competitive advantage (McDonald and Dunbar, 1992; Croft, 1994; Kotler and Keller, 2003). While Croft (1994) highlights market segmentation as aiding decision making in general, Yankelovich (1964) specifies what exactly segmentation analysis can achieve. In identifying groups of customers with similar needs, a marketer has the information required to target the most profitable group with the most potential. With this knowledge a marketer can develop product lines and promotion activities, choose advertising media, advance the positioning of offerings and improve the timing of advertising to appeal to the segment of the market whose needs possess the greatest profit potential. A critical decision in conducting segmentation research is choosing an appropriate segmentation base. A segmentation base is the criterion used to divide the defined market into groups of consumers with similarities. At the most basic level, a market can be split up according to the profiles of the consumers. Variables such as demographics, the geographic location of consumers and the socio-economic class to which they belong are considered profile segmentation bases. The behavioural segmentation category includes bases such as usage occasion, benefits sought, and perceptions and beliefs, while the psychographic category includes lifestyle and personality variables as a means of identifying groups of consumers with similarities. The more abstract and less concrete the information required for the segmentation base, the more difficult it is to measure responses and their link with behaviour. In choosing a segmentation base for a wine market study, there are a number of aspects of the market which need to be considered. Literature on wine consumer behaviour focuses on two areas: the factors influencing wine consumer behaviour, and the wine consumer's purchasing decision-making process.
An effective wine segmentation study would be one which aids understanding of these two areas and helps marketers evaluate how stimuli, such as brand positioning strategies, influence wine choice. According to Bruwer et al. (2002), wine markets have been segmented using all the bases identified above. For the purpose of an Irish wine market segmentation study, behavioural segmentation with an involvement basis is a suitable choice, as it is an approach which yields insight into consumer behaviour but is not overly difficult to measure (Lockshin et al., 2001). Employing a behavioural segmentation base allows the decision-making process and the influencing factors of the Irish wine consumer to be tested, and makes the process and the factors the basis for splitting up the market into meaningful and actionable segments. A key consideration in exploring the wine consumer decision-making process and consumers' evaluation of alternatives is that wine attributes represent intrinsic and extrinsic cues for the consumer. Sanchez and Gill (1998) illustrate how consumers have preferences according to the bundle of benefits they are seeking. The challenge in understanding these preferences is the large number of wine attributes which exist, and therefore the greater number of possible bundles of benefits that are present. Wine attributes include: brand name, producer, grape variety, blend of grape varieties, vintage, region of origin, price, label, bottle type, cork type, bottle size, colour of wine, style of wine and level of alcohol. Due to the large number of wine attributes, wine consumers have a wider range of considerations in making purchasing decisions. Examining the hierarchy of importance of wine attributes to Irish wine buyers is a central consideration in segmenting the market.
Methodology: The research design is primarily descriptive in nature, as similar investigations into other wine markets have been descriptive in design (Orth et al., 2005; Johnson et al., 1991; Hall, 2004; Bruwer et al., 2002; Li and Reid, 2002; Johnson, 2003; Thomas and Pickering, 2003). Due to the descriptive nature of this research, a quantitative approach to primary data collection is most suitable. Quantitative data is appropriate for determining and understanding the behaviours and characteristics of a large sample of wine drinkers. Specifically, survey data collection was undertaken, with a questionnaire administered through a personal interview. The questionnaire was two pages in length with 15 tick-box questions, gathering data on four topics: volume of usage, buying preference, product involvement, and demographic information. As an accurate sampling frame was unavailable for the population of 1,451,000 wine drinkers in Ireland (WDB, 2004), non-probability sampling was undertaken. The sampling type was convenience sampling, as wine buyers were approached at the point of purchase. Convenience sampling has been employed in previous wine segmentation studies, namely Australian wine market research by Hall (2004) and Bruwer et al. (2002). To ensure the sample was as representative of the population as possible, a large sample size of 300 was chosen and the questionnaire was administered in a variety of outlets to gather information from wine drinkers with wide-ranging involvement levels. The fieldwork took place over three weeks in June 2006, in eight wine-selling outlets in Galway City and County. This approach is similar to other wine segmentation studies (Bruwer et al., 2000; Hall and Winchester, 1999) in which the fieldwork was limited to one region of the market being researched. Research by Bruwer et al. (2000) segmenting the Australian wine market using a wine-related lifestyle approach is based on fieldwork conducted in Adelaide, while Hall and Winchester's (1999) findings, empirically confirming segments in the Australian wine market, are derived from questionnaires administered in Melbourne. The fieldwork locations for this research consisted of four supermarkets, two off licences and two wine shops. The supermarkets and off licences were in both Galway City and Galway County locations, and the wine shops are situated in Galway City. In total, 316 questionnaires were collected, and after nine were removed for failing a screening question or being incomplete, 307 questionnaires were coded and inputted into SPSS for analysis.
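The screening step just described (316 questionnaires collected, nine removed, 307 analysed) amounts to a simple filter before analysis; here is a minimal sketch with pandas standing in for SPSS, where the file and column names are assumptions for illustration:

```python
import pandas as pd

# Hypothetical layout of the coded questionnaires: 15 tick-box items
# covering usage, buying preference, involvement and demographics.
responses = pd.read_csv("wine_survey_galway_2006.csv")  # hypothetical file

# Drop respondents who failed the screening question or left items blank
# (the paper removed 9 of 316 questionnaires on these grounds).
clean = responses[responses["passed_screening"] == 1].dropna()

# Descriptive statistics of the kind reported in the findings section.
print(len(clean))
print(clean["bottles_per_month"].mean())
print(clean["spend_per_bottle_eur"].mean())
```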
Findings: There are two sets of findings resulting from the primary research: an overall wine sample analysis and a segment analysis. The overall wine sample is composed of 64 per cent female respondents and 36 per cent male respondents. In terms of age group, over 75 per cent of the sample is aged between 25 and 54 years. The consumer behaviour data reveals that the average Irish wine drinker buys seven bottles of wine per month, spending EUR10.57 a bottle, with an average monthly spend of EUR80. Wine is usually bought in a standard-size bottle of 75 cl (95 per cent), in either a supermarket (42 per cent) or off licence (35 per cent), and is mostly red wine (43 per cent). Wine is most frequently consumed when dining at home (50 per cent), followed by when dining out (20 per cent). The most important product attributes when buying wine are: price per bottle, style of wine (e.g. fruity), region of origin (e.g. Burgundy) and brand name (e.g. Jacob's Creek). The most popular wine is Australian wine, followed by Chilean, French and South African wine. A comparison of the overall wine sample findings with the Wine Development Board's (2004) national statistics shows that the characteristics of the research sample are similar to those of the national market. The sample findings for country of origin preference are notably similar to the WDB (2004) statistics (see Figure 1). One exception is US wine, which is preferred by just 3 per cent of the sample but accounts for 13 per cent in the national statistics. The similarities between the two sets of statistics suggest the sample findings on consumer behaviour are representative of the population. Similarly, Figure 2 compares the age categories of the sample with those of the WDB (2004) population. With the exception of the "65 or older" category, the age profiles of the two sets of data are similar. Discrepancies between the current research and the WDB (2004) findings may be explained by the time lapse between the two studies, or by the WDB research population being wine consumers while the current research population was wine buyers.
Segment analysis
Conclusion: The research examines how the Irish wine market can be effectively segmented to improve brand positioning. The increase in consumption of wine and the increase in preference for wine from New World countries are key trends in the Irish wine market. Consumer behaviour, particularly involvement in wine purchases, and the importance of wine attributes are necessary considerations in a wine market segmentation study. In terms of relevance, substance and accessibility, a k-cluster segmentation design with a behavioural basis including an involvement variable proves to be an appropriate approach to segmenting the Irish wine market. The profiles of the three resulting segments, the casual wine buyer, the value-seeking wine buyer and the wine traditionalist, are sizeable, accessible, relevant and actionable. The profiles developed as a result of the primary research provide wine marketers with an insight into Irish wine consumer behaviour. Specifically, marketers are provided with accessible and sizeable segments, with meaningful distinctions and similarities drawn between them. Brand positioning can be improved by ensuring the brand communicates and emphasises the product attributes which the targeted segments value most when choosing wine. The demographic information and the buyer behaviour data provide marketers with points of access to their target market. The involvement base, when used in conjunction with other behaviour variables, proves effective in producing sizeable, accessible and actionable segments. A limitation of adopting a behavioural basis in conducting the segmentation is the highly descriptive nature of the resulting data. Examining behaviours gives an insight into how consumers act, but fails to take into account the underlying motivations and rationale for consumer actions. The use of more complex segmentation bases, such as value systems and lifestyles, would yield a richer understanding of the Irish wine consumer. A second suggestion for future research is an empirically tested national-level wine market behavioural segmentation study to confirm the findings of this research. In answering the research question, the Irish wine market can be effectively segmented with a k-cluster design and a behavioural basis. Effectively segmenting the Irish wine market requires more than the involvement variable, and calls for other behavioural variables, including the importance of product attributes and country of origin preferences, to be included in the segmentation process.
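The conclusion's "k-cluster design with a behavioural basis" is not spelled out in code in the paper; a rough sketch of one way to reproduce it with k-means follows, where the column names are assumptions and k=3 simply mirrors the three reported segments:

```python
import pandas as pd
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Hypothetical behavioural basis: involvement plus usage and
# attribute-importance variables, as discussed in the paper.
clean = pd.read_csv("wine_survey_clean.csv")  # hypothetical coded data
features = ["involvement", "bottles_per_month", "spend_per_bottle_eur",
            "importance_price", "importance_style", "importance_origin",
            "importance_brand"]

# Standardise so that no single variable dominates the distance metric.
X = StandardScaler().fit_transform(clean[features])

# k=3 mirrors the three reported segments: casual wine buyer,
# value-seeking wine buyer and wine traditionalist.
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
clean["segment"] = km.labels_

# Profile each segment on the behavioural basis, as in the paper's
# segment analysis.
print(clean.groupby("segment")[features].mean())
```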
|
Consumers' utilization of reference prices: the moderating role of involvement
|
[
"Internal reference price",
"External reference price",
"Market‐based reference price",
"Involvement",
"Reference price utilization",
"Consumer behaviour",
"Prices"
] |
Summarize the following paper into a structured abstract.
__NO_TITLE__: There is ample empirical evidence in support of the important role that reference prices (both internal and external) play in determining consumers' evaluations of posted prices (for comprehensive reviews of the many operationalizations of reference price, see Briesch et al., 1997; Mazumdar et al., 2005). Collectively, the body of evidence strongly supports the conclusion that consumers rely on (internal and/or external) reference prices to make judgments about perceived value, perceived fairness and perceived expensiveness that, in turn, influence purchase intentions. The inescapable conclusion for practitioners is that they must pay attention to understanding, measuring and managing consumers' reference prices to ensure that their prices are evaluated favorably.
Motivation and scope: The motivation for this paper comes from recognizing two main gaps in the literature. First, despite multiple definitions of reference price, prior research has not adequately addressed heterogeneity in the types of reference prices that are evoked and the ways in which consumers might process available price information in relation to multiple reference prices. This paper considers how one specific source of heterogeneity, involvement, influences consumers' utilization of (multiple) reference prices. Research on consumer information processing (see Bettman, 1979) suggests that several individual differences (e.g. involvement with the product and/or the purchase) may influence the type of information considered and the extent to which consumers are willing to elaborate on available information. This research attempts to fill a gap in the literature by investigating the role of involvement in the context of reference prices. Second, there have been very few attempts to examine the inter-relationship between different reference prices and their combined influence on consumers' evaluations. Previous research (Chandrashekaran and Jagpal, 1995; Rajendran and Tellis, 1994; Mayhew and Winer, 1992) has examined the simultaneous and independent influences of internal and external references on consumers' choices. However, questions about the inter-relationships between different types of reference prices remain. This study attempts to extend current thinking to include sequential effects of reference prices on consumers' evaluations. It is generally established in the literature that consumers may utilize some external reference prices to adjust internal reference prices, which in turn affect evaluation. However, some research (e.g. Rajendran and Tellis, 1994) has also shown that consumers may utilize certain external reference prices in conjunction with internal reference prices. Hence, there is a need to validate the process empirically. In addition, the inter-relationship between specific internal and external reference prices, along with the role of involvement, has not been empirically investigated. In summary, this study proposes to extend the literature by attempting to answer two questions:
1. Do all consumers evaluate a given offer against the same set of reference prices?
2. Do all consumers utilize internal and external reference prices in the same way (i.e. using the same underlying process)?
Review of internal and external reference prices: When consumers evaluate retail prices, they may do so against several reference prices. Of these, some are internal and others are external. The term "internal reference price" (henceforth referred to as IRP) has generally been used to refer to price information held in consumers' long-term memories. For example, Winer (1986) proposed that consumers use past price information to form expectations of what the retail price is likely to be on the next purchase occasion. As described in the literature, these price expectations represent internally held reference prices, and they are based on past price information that must be retrieved from consumers' long-term memories during the purchase occasion. In fact, most empirical studies have based their measures of internal reference price on such a temporally generated construct (Kalwani et al., 1990; Mayhew and Winer, 1992; Rajendran and Tellis, 1994; Briesch et al., 1997). The literature offers compelling empirical evidence that such (memory-based) internal reference prices are crucial in predicting consumers' evaluations (of retail prices) and subsequent choice behavior. In contrast to internal reference price, external reference prices (henceforth referred to as ERP) refer to comparative (competitive) price information that is available in the purchase environment, i.e. prices that consumers are likely to encounter during the search process. Thus, consumers do not have to expend cognitive resources in trying to retrieve information on current market prices from their long-term memories. This information is likely to be obtained during the search process and utilized directly from working memory. Here, researchers have identified several possible references, including the normal (typically charged) market price (Urbany and Dickson, 1991) and the lowest market price (Biswas and Sherrell, 1993). These and other similar studies have shown that such market-based references are important predictors of consumers' choices. It is important to note the conceptual distinction between IRP and ERP. While IRP refers to price information that is stored in and retrieved from consumers' long-term memories, ERPs are derived from price cues that are either present or evoked at the time of purchase. Specifically, ERPs may represent consumers' beliefs about current market prices that may be formed during the search associated with the purchase process. Such beliefs may include the highest price, the lowest available price and the normal/average market price. Note, however, that there is likely to be significant heterogeneity (due in part to differing levels of involvement) in the extent to which consumers search for the lowest/best price. Such selective and differential exposure is likely to result in differences across individuals in their perceptions of ERPs.
Consumers' utilization of reference prices: It is well established that reference price is a complex, multi-faceted construct, and that reference prices are important in explaining consumers' brand choices (Briesch et al., 1997; Mazumdar et al., 2005). In addition, consumers are likely to use more than one type of reference price to evaluate an offer (Mayhew and Winer, 1992; Rajendran and Tellis, 1994; Chandrashekaran and Jagpal, 1995; Shirai, 2003). However, we know considerably less about the process by which consumers may combine/integrate multiple reference prices. Even less is known about individual differences in consumers' utilization of multiple reference prices. Prior research has acknowledged that different consumer groups (segments) are likely to evoke and use different sets of reference prices (Winer, 1986; Mazumdar and Papatla, 2000). Indeed, prior research has demonstrated that several factors lead to differences in the references used to evaluate a posted price. For example, reference price utilization and price perceptions have been linked to frequency of purchase (Thomas and Menon, 2007), product type (Chandrashekaran and Jagpal, 1995; Lowe and Alpert, 2007) and price consciousness (Alford and Biswas, 2002; Palazon and Delgado, 2009). In summary, prior research supports the claim that different segments may evoke and utilize different sets of reference prices, and suggests that it may not be appropriate to assume that all consumers utilize the same types (and number) of reference prices. As emphasized by Lowe and Alpert (2007) and by Mazumdar et al. (2005), despite some research on the topic, we do not know much about how consumers integrate multiple reference prices. More importantly, we need a better understanding of the factors that may determine which type(s) of reference price(s) are evoked and utilized when making price-related judgments. Along those lines, this research proposes to investigate how the level of involvement leads to differences in the number and types of reference prices that are used in the evaluation process. More specifically, it focuses on how consumers combine and utilize internal reference price (operationalized as expected price) and market-based external reference prices (operationalized as subjective knowledge of normal and lowest market prices)[1]. Consistent with established theory, this research acknowledges that consumers may adjust their internal reference prices based on their perceptions of market-based references. However, the conceptualization adopted here (explained later) also allows for consumers to evaluate posted prices directly against market-based (external) references. Regarding the relative impacts of the two types of reference prices on consumers' evaluations, there is some empirical evidence (Mayhew and Winer, 1992) suggesting that external reference prices may be more influential than internal reference prices. However, that conclusion might be premature because previous research has not paid attention to the moderating roles of individual-level variables, for example involvement, that are likely to influence how consumers choose to allocate their cognitive resources; this is likely to impact the relative emphasis that consumers place on the two types of reference prices when evaluating a posted price.
The role of consumers' involvement: Consumers' involvement with a product class has been shown to influence their information-processing strategy (Bettman, 1979; Bloch et al., 1986) by affecting the extent of search, the type of information sought, the relative importance of different types of information (e.g. price versus product attribute information), and the motivation to process available information. High involvement is generally associated with product knowledge (familiarity) and the motivation to search for and use relevant information. On the other hand, low involvement has been associated with low product/price knowledge (lack of familiarity) and the lack of motivation to engage in detailed processing of information. Although there is substantial research dealing with how involvement affects consumers' use of product-attribute information and how involvement affects consumers' responses to marketing communications (advertising), research in the (reference) pricing context is relatively limited. Available evidence reveals that variables such as involvement, prior knowledge and price consciousness moderate consumers' internal reference prices and their acceptance of retail prices. These variables have been found to affect consumers' internal standards (Kosenko and Rahtz, 1988; Lichtenstein et al., 1988; Biswas and Sherrell, 1993), the widths (i.e. upper and lower limits) of their price-acceptance regions (Lichtenstein et al., 1988; Rao and Sieben, 1992), and, finally, their confidence in their reference price estimates (Biswas and Sherrell, 1993). However, we know less about how such factors are likely to influence the ways in which consumers may combine/utilize multiple reference prices. The current study focuses on one such factor, i.e. involvement.
Hypotheses: This research explores the notion that different reference price utilization strategies unfold for consumers with high and low involvement. The idea that involvement affects which decision route consumers are likely to follow is not new. Extant research on consumers' information processing has shown that involvement affects the way consumers acquire, store, retrieve and use relevant information to make decisions (Bettman, 1979; Bloch et al., 1986). Compared to those who are not involved, involved consumers are more motivated to search for relevant information. They also possess better knowledge about products and prices than consumers who lack involvement. Therefore, these consumers are more likely to possess well-defined internal standards than low-involvement consumers (see also Chandrashekaran and Jagpal, 1995). Consequently, they are likely to be more comfortable evaluating retail prices against their internal (memory-based) reference prices. In contrast, less involved consumers are not as motivated as their involved counterparts to engage in extensive search behavior. In addition, they are not likely to engage in detailed processing of available (price) information. As a result, their internal standards are not likely to be well defined, and they are not confident about acting on this information (Chandrashekaran and Jagpal, 1995). Consistent with such theorizing, Biswas and Sherrell (1993) found that high-involvement consumers are indeed more confident of their internal reference price estimates than low-involvement consumers. Therefore, there is some empirical evidence to support the expectation that the level of involvement is likely to moderate consumers' utilization of reference prices. Consistent with the notion that consumers are likely to possess reference prices that are brand specific (Briesch et al., 1997), a brand's own previous prices may be considered the most relevant information on which to base a reference price for the brand. Thus, it may be reasonable to expect high-involvement consumers' evaluations to be most closely associated with a summary of a brand's past prices, i.e. the expected price construct. Briesch et al. (1997) empirically compared five different model formulations from the literature and confirmed that a model using a temporally generated, brand-specific reference price is the best predictor of consumer choice. However, the authors did not examine individual differences in the utilization of past price information. As discussed earlier, low-involvement individuals are by definition not equipped with the knowledge to evaluate price on the basis of relevant factors. Recall that these consumers are not expected to internalize (past) price information, and thus cannot be expected to have well-defined (reliable) price expectations. Therefore, these individuals are likely to evaluate a brand's current price on the basis of more readily accessible perceptions they may have formed of current market prices. In summary, high- and low-involvement consumers are expected to follow different processes in evaluating the same objective price information. Highly involved consumers base their evaluations on internally held beliefs. However, because they are exposed to market price information in the course of their search, it is likely that they utilize this information to update (fine-tune) their internal standards to be consistent with current market information.
In contrast, low-involvement consumers are not motivated to devote cognitive resources to storing price information in their long-term memories and, therefore, are less confident about their knowledge of past prices. Consequently, they are likely to rely on immediately available external cues (i.e. market-based reference prices) to make their evaluations (as opposed to first updating their internal reference prices and subsequently utilizing them to evaluate posted prices). More formally, it is expected that:
H1. High-involvement consumers evaluate retail prices only against their price expectations (i.e. internal reference price).
H2. High-involvement consumers utilize market-based external references only to update their internal beliefs (expected prices).
H3. Low-involvement consumers evaluate retail prices against one or more market-based (external) reference prices.
H4. Low-involvement consumers do not utilize internal reference prices (price expectations) in evaluating retail prices.
Figure 1 presents the hypothesized model of the processes by which market-based and truly internal reference prices affect consumers' evaluations.
Data collection: Two hundred undergraduate business majors attending a large Northeastern state university participated in this study. The data required for this study were collected in four stages over a two-week period. The entire data collection task was designed in the form of an experiential exercise.
Stage one: initial measures
Analysis: The first task was to split the sample into high and low groups based on their involvement with the product category (jeans). A median split of the distribution of PII scores (median involvement = 103) yielded high- and low-involvement groups of approximately equal sizes (n_high = 111 and n_low = 114). In addition, the mean involvement scores in the two groups (mean = 119.26, SD = 10.50 in the high-involvement group versus mean = 83.03, SD = 16.87 in the low-involvement group) were significantly different (t = 19.27, p < 0.001). Finally, an examination of the mean knowledge (familiarity) scores in the two involvement groups (mean familiarity = 5.99 and 4.75 for the high- and low-involvement groups, respectively) confirmed the premise that, in this product category, high-involvement consumers are significantly more knowledgeable than low-involvement consumers (t = 7.91, p < 0.01). To test the hypotheses empirically, correlation matrices in each group were analyzed using the structural equations methodology (interested readers may refer to Joreskog and Sorbom, 1988). Figure 1 shows the model structure specified in both groups. The conceptual model shown in Figure 1 allows for the possibility that market-based reference prices (lowest price and normal price) may affect evaluations directly (along with expected price), or indirectly by first influencing expected price, which, in turn, affects how the offer is evaluated. Therefore, the model enables us to investigate the process by which consumers evoke, combine and utilize internal and market-based reference prices. Recall that this is an improvement over Rajendran and Tellis (1994), who examined how both temporal and contextual reference prices influence consumers' evaluations simultaneously. In relation to Figure 1, the hypotheses presented here predict that:
* γ11, γ12 and β21 > 0, and γ21 = γ22 = 0 in the high-involvement group; and
* β21 = γ11 = γ12 = 0, and γ21, γ22 > 0 in the low-involvement group.
As discussed earlier, consumers' evaluations were measured using three scale items corresponding to their perceptions of the value of the offer, the attractiveness of the deal, and their willingness to buy the item at the stated price. It could be argued that the concept of perceived value is related to consumers' perceptions of acquisition utility (AU), while their perceptions of the deal are related to transaction utility (TU) as defined by Thaler (1985). According to Thaler, these are conceptually different from purchase intentions (behavioral responses). However, principal components factor analysis resulted in a single factor and did not reveal conceptual differences between the three measures.
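For concreteness, the path structure just described can be written as two linear equations in the text's own γ/β notation; this is a reconstruction from the stated paths (Figure 1 itself is not reproduced here):

$$\begin{aligned}
\text{ExpectedPrice} &= \gamma_{11}\,\text{LowestPrice} + \gamma_{12}\,\text{NormalPrice} + \zeta_1,\\
\text{Evaluation} &= \beta_{21}\,\text{ExpectedPrice} + \gamma_{21}\,\text{LowestPrice} + \gamma_{22}\,\text{NormalPrice} + \zeta_2,
\end{aligned}$$

so H1/H2 amount to the first equation carrying all market-price influence for high-involvement consumers (γ21 = γ22 = 0), while H3/H4 shut down the indirect route for low-involvement consumers (β21 = γ11 = γ12 = 0).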
Results: Table I compares the two involvement groups on their reference price estimates and evaluations of the offer. It is interesting to note that there is no significant difference in reference price estimates across the two involvement groups (this point is also discussed later in the manuscript). Furthermore, there are no major differences between the two groups in their evaluations of the offer, except that high-involvement consumers perceive the deal to be moderately more attractive than low-involvement consumers (p <= 0.10)[4]. However, one cannot conclude on the basis of these results alone that high- and low-involvement consumers are homogeneous, thus justifying pooled analyses. Although consumers' final evaluations are important, it is crucial that we examine the process (route) by which high- and low-involvement consumers arrive at the same destination. In addition, consumers' final evaluations are not central to this study. Rather, the primary objective of this study is to examine heterogeneity in consumers' utilization of market-based and truly internal reference prices to evaluate the same objective price information. Indeed, although consumers do not differ significantly in their final evaluations of the offer, this paper seeks to uncover heterogeneity in reference price utilization, i.e. the means by which high- and low-involvement consumers reach the end (their final evaluations). Table II shows the estimated models in each group along with several fit indices. It is clear that the specified model structure fits the data well in both groups (χ² with six df = 9.59, p = 0.143 in the high-involvement group; and χ² with six df = 6.09, p = 0.413 in the low-involvement group). In addition, the structural models explain a significant proportion of the variance (R² = 0.51 and 0.53 in the high- and low-involvement groups, respectively). For both groups, the RMSEA (Steiger, 1985) and other fit indices indicate satisfactory model fit. All indicated paths are of the predicted sign and are significant at α = 0.01. All non-significant effects, i.e. paths that are not significantly different from zero, are indicated as "NS". From Table II, it is clear that consumers who are highly involved with the product evaluate the retail price against a single internal reference price, expected price (β21 is significant at p <= 0.01), but do not utilize any market-based references to make evaluations (note that γ21 and γ22 are not significantly different from zero). These findings strongly support the hypothesis (H1) for consumers' utilization of truly internal reference prices under high involvement. H2 suggests that involved consumers are likely to use market-based references primarily to update their internal references. Consistent with this hypothesis, Table II shows that normal price (a market-based reference price) has a significant positive effect on the internal standards of involved consumers (γ12 = 0.727, significant at α = 0.01). This finding is consistent with the view that involved consumers process price information sequentially, i.e. an estimate of the normal/average market price is used to update (fine-tune) the internal standard, which, in turn, is used to evaluate the retail price. Contrary to expectation, these consumers do not utilize their perceptions of the lowest market price (γ11 is not significant). Thus, overall, H2 is partially supported. In contrast to high-involvement consumers, those in the low-involvement group do not use their internal standards to evaluate the retail price (β21 = 0).
As shown in Table II, these consumers base their evaluations of the retail price on market-based references (γ21 = 0.257 and γ22 = 0.211, both significant at α = 0.01). Thus, H3 and H4 are supported. In conjunction, these findings support the conclusion that low levels of involvement facilitate simultaneous utilization of several market-based reference prices to evaluate retail prices[5]. It is interesting to note that highly involved consumers utilize only normal price, but not their perceptions of lowest price, in formulating their price expectations. Clearly, further research is required to offer a convincing explanation for this observation. One possible explanation is that these consumers may generally be more aware that the observed lowest price (a single data point) is likely to be a temporary promotional offer that does not reflect the "normal" price for the product/category. Consequently, these consumers may discount this information and rely to a greater extent on more comprehensive information, for example an overall estimate of the normal/average price. Urbany and Dickson (1991) found that consumers' estimates of normal prices are reasonable surrogates for their reference prices. However, the authors did not investigate individual differences in consumers' perceptions of normal/average prices. It is likely that one segment (e.g. high-involvement consumers) has more accurate estimates of normal prices than another segment (e.g. low-involvement consumers). Hopefully, future research will investigate and shed some light on this and other similar issues dealing with individual differences in price perception and evaluation. In summary, the results reveal that although the two types of consumers (high- and low-involvement) may have similar reference prices and may even evaluate an offer similarly, they do so via distinct routes, i.e. by utilizing different types of reference prices. Specifically, whereas highly involved consumers rely solely on their internal standards to evaluate retail prices, those who are less involved with the product category utilize several market-based references to judge the attractiveness of retail prices. A general discussion of the results along with implications and limitations of the study follows.
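Since the model in Figure 1 is a recursive path model among observed variables, its two equations can be approximated with ordinary least squares, estimated separately within each involvement group. The sketch below illustrates that logic; it is an approximation under those assumptions, not the LISREL-style covariance-structure estimation reported in Table II, and the column names (`lowest`, `normal`, `expected`, `eval_score`, `group`) are hypothetical.

```python
# Approximate the two-equation path model with OLS, assuming a pandas
# DataFrame with hypothetical columns for the reference prices and the
# offer evaluation. The gamma/beta labels follow the paper's notation.
import pandas as pd
import statsmodels.formula.api as smf

def fit_path_model(df: pd.DataFrame) -> None:
    # Equation 1: expected price formed from market-based references
    # (gamma11: lowest -> expected; gamma12: normal -> expected).
    eq1 = smf.ols("expected ~ lowest + normal", data=df).fit()

    # Equation 2: offer evaluation (beta21: expected -> evaluation;
    # gamma21: lowest -> evaluation; gamma22: normal -> evaluation).
    eq2 = smf.ols("eval_score ~ expected + lowest + normal", data=df).fit()

    print(eq1.params, "R2 =", round(eq1.rsquared, 2))
    print(eq2.params, "R2 =", round(eq2.rsquared, 2))

# The hypotheses would then be checked group by group, e.g.:
# fit_path_model(df[df["group"] == "high"])
# fit_path_model(df[df["group"] == "low"])
```

Estimating the equations separately within each group mirrors the multi-group comparison in Table II; a dedicated SEM package would additionally provide the χ2 and RMSEA fit statistics quoted above.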
Discussion: This study intended to draw attention to the heterogeneity in consumers' utilization of multiple reference prices. The results obtained here offer some evidence that involvement affects consumers' utilization of reference prices. Although involvement does not affect the levels of consumers' internal standards or their final evaluations, it plays a significant role in the way consumers utilize these standards to evaluate the same objective price information. Highly involved consumers, who are more knowledgeable about the product class, evaluate offers against a single internal standard (expected price), which supports previous researchers (e.g. Winer, 1986; Kalwani et al., 1990) in their use of expected price as a reference price to explain consumers' choices. However, low-involvement consumers do not use expected price to evaluate offers. Rather, they use two market-based references (lowest price and normal price) to determine the overall value of an offer. Thus, involvement affects the number and types of reference prices used in the evaluation process. The results are also consistent with information-processing theory (Bettman, 1979), which holds that high involvement encourages a deeper and more complex processing strategy than low involvement, which supports less complex processing and the use of easily available information. Highly involved consumers retrieve price information stored in long-term memory and fine-tune this internal standard (expected price) using current market price information (the estimated normal/average market price). Thus, high involvement supports a hierarchical model in which market-based and truly internal reference prices are used sequentially. This conclusion is consistent with the underlying process implied in the vast body of research on reference price formation. In contrast, low-involvement consumers do not possess the motivation to engage in such a complex evaluation process. They simply utilize the more readily available market-based references and evaluate offers in a single step. It is appropriate to point out that some of the results obtained here are inconsistent with previous research findings and call for further investigation. For example, Lichtenstein et al. (1988) found that consumers' price acceptability level (i.e. internal reference price level) is positively related to involvement. However, no such differences were found here. One possibility is that the results are moderated by consumers' knowledge in this category. Rao and Sieben (1992) found that the upper and lower limits of consumers' price-acceptance regions increase with knowledge up to a point, and then level off. It is likely that, given the nature of the product category used in this study (jeans), most consumers were familiar with the product. Indeed, an examination of the mean knowledge scores revealed that both high- and low-involvement consumers lie in the upper half of the scale (means for the high- and low-involvement groups = 5.99 and 4.75 on seven-point scales), which may account for the lack of difference in their reference prices. Furthermore, product knowledge was assessed using a single item that measured subjects' familiarity with the product category. Additional research is needed to investigate this issue further. Another possible limitation of this study is the omission of several internal and market-based reference prices (e.g. fair price and highest price) that have been mentioned in the literature.
It might be useful for future research to include more definitions of reference price to investigate whether the premise of this research holds. Finally, it may be useful to examine the roles of other moderating factors (e.g. gender, product experience, price consciousness, etc.). This research has important implications for both marketing research and practice. It is important for researchers to identify other sources of heterogeneity and their impact on consumers' construction and utilization of reference prices. The finding that high- and low-involvement consumers differ in the types of reference prices used and in the way they use this information to evaluate offers strongly supports the need for using different model structures to represent these sub-populations. From a practical standpoint, this study underscores the importance of accounting for individual differences in strategies designed to affect consumers' perceptions of retail prices. For example, marketers might segment the market on the basis of reference price utilization and design different strategies to obtain optimal results. It is hoped that this study will stimulate further research in the area so that we may gain a better understanding of how consumers process and respond to retail pricing strategies. Such knowledge will undoubtedly be useful in designing more effective and efficient marketing strategies.
|
E-campaigning versus the Public Official Election Act in South Korea: Causes, consequences and implications of cyber-exile
|
[
"South Korea",
"Politics",
"Legislation",
"Elections",
"Internet",
"Social networking sites",
"E‐campaign",
"Election law",
"Cyber‐exile",
"YouTube",
"Network analysis"
] |
Summarize the following paper into structured abstract.
Introduction: In South Korea, restrictions on political speech surrounding elections are more stringent than in many other countries. The Public Official Election Act (hereinafter POEA, enacted in March 1994 as a result of the integration of four different election laws corresponding to different layers of public administration) contains a number of provisions prohibiting campaign activities that would be standard practice in most democratic countries. According to Article 59, for example, official campaigns are allowed only during a period from the day following the closing date of candidate registration to the eve of the election. This amounts to 23 days in the case of presidential elections, and 14 days for elections of legislators and local governors. Even within this brief designated period, campaigns are subject to tight protocols prescribed in Articles 58 ("Definition of election campaign"), 60 ("Persons barred from election campaign"), 68 ("Campaign sashes"), 92 ("Prohibition of election campaign using motion pictures, etc."), 98 ("Restriction on use of broadcast for election campaign"), 99 ("Prohibition of election campaign by internal broadcast, etc."), and 100 ("Prohibition of use of recorders, etc."), to name a few. Having a more direct effect on individual voters are Article 90 and Article 93, which ban the display or distribution of election-related paraphernalia in the 180 days prior to an election. Section 1 of Article 93 states that "no one shall distribute or display advertisements, letters, posters, photographs, documents, drawings, printed materials, audiotapes, videotapes or the like that convey endorsement of or opposition to a candidate or a party" during those 180 days. The already overarching scope of Article 93(1) has been stretched further since the National Election Commission (NEC) and the Public Prosecutors' Office started to apply it to the online context in 1996. Tracking down the author of an online source and enforcing this provision is relatively easy in Korean cyberspace, in that all Korean users are legally bound to verify their real identities (by providing their resident ID numbers) when joining major online services. Thereby, numerous bulletin board entries, blog posts, viewer comments on news sites, and user-generated content on Web 2.0 platforms have resulted in legal ramifications ranging from fines to imprisonment. The 2007 presidential election
Literature review: when YouTube meets politics: Given the mainstream popularity of Web 2.0 services in recent years, current literature offers extensive discussion regarding the political implications of the increasing practice of digitally mediated social networking and content sharing. Specific interest has been shown in the question of whether such practice increases voter turnout or other forms of political participation in the traditional sense of the term. Research findings have, however, been mixed so far; while some consider that the use of Facebook and other social networking sites (SNSs) for political purposes is a significant predictor of general political participation (e.g. Vitak et al., 2011), others suggest that reliance on SNSs is not necessarily related to an increase in political participation (e.g. Baumgartner and Morris, 2010; Zhang et al., 2010). This broad discussion itself is inconclusive and in need of further exploration, but for the purposes of this paper, we decided to focus on political activity on and through YouTube. This commercially successful video-sharing site often features among SNSs in journalistic and scholarly accounts (boyd and Ellison, 2007), and this fact appropriately captures some, but not all, of its characteristics. It indeed provides a range of social networking features, such as "Comments", "Favorite", "Share" and "Subscriptions". However, it also stands apart from other SNSs because it is a unique hybrid form of social media and mass media. Despite the social networks that underlie or emerge from its use (Lange, 2007b; Rotman and Golbeck, 2011; Sifman, 2011), YouTube originally has the character of "a media distribution platform, kind of like television" (Burgess and Green, 2009, p. 3). Burgess and Green (2009, p. 3) argue that "YouTube's ascendancy has occurred amid a fog of uncertainty and contradiction around what it is actually for." It is, so the two authors' argument goes, a new breed of business, or an example of what Weinberger (2007) calls "meta businesses", where old media heavyweights and amateur content creators form a curious cohabitation (Burgess and Green, 2009, pp. 4-5). Given that its format induces user participation, YouTube, along with other Web 2.0 models, revitalized early internet idealism about better democracy (Bruns, 2008; Jenkins, 2006; Weinberger, 2007). As Marwick (2007) also pointed out based on a content analysis of news coverage, YouTube has been portrayed in a rather celebratory tone in the mass media for its democratic (or democratizing) potential. This kind of portrayal has been supported by a handful of anecdotal cases. During the 2006 midterm elections in the USA, a YouTube-publicized gaffe, later called the "macaca moment", cost Republican Senator George Allen (Virginia) his re-election bid, despite his huge initial lead in the opinion polls over the Democratic challenger, Jim Webb. This result was attributed to a video clip showing Allen making racially discriminatory remarks, targeting one of Webb's volunteers, during a campaign tour. The incident quickly became a widely publicized issue once the footage was posted on YouTube, which effectively led to Allen's defeat (Sidarth, 2006). Since then, the term "macaca moment" has been used to refer to "high-profile candidate gaffes that are captured on YouTube, receive a cascade of citizen views and contribute to some substantial political impact" (Karpf, 2010).
A further example of such an incident took place in the 2007 Finnish national elections (Carlson and Strandberg, 2008, p. 171). The discussion of the political implications of YouTube intensified around the 2008 US presidential election, because that was when Web 2.0 applications started to be incorporated into the mainstream campaign repertoire in the USA. Then-candidate Barack Obama received considerable media attention for his online presence, which outshone that of the rival candidate Hillary Clinton during the primaries and later that of his Republican opponent John McCain. Obama's campaign made heavy use of Facebook and YouTube to engage the attention of younger voters (Young, 2008), securing record campaign contributions mainly through online donations (Cooper, 2008; Carpenter, 2010). The 2008 election is widely considered to have been a pivotal moment in US campaign history. Carpenter (2010) and Ricke (2010), for example, pointed out that YouTube, particularly its joint projects with PBS ("Video Your Vote") and with CNN ("The CNN/YouTube Debates"), afforded more room than ever for ordinary voters to participate in the campaign process and consequently served as "an instrument of 'checks and balances'" (Carpenter, 2010, p. 223). Some other scholars focused on how individual candidates made use of YouTube to deliver their messages in this particular election. Church (2010) examined leadership discourse by analyzing YouTube clips featuring 16 candidates in the race and suggested that, given the emergence of what he termed "the postmodern constituency" (Church, 2010, p. 138), as well as the unfiltered nature of the medium (Church, 2010, p. 139), voters' focus shifted from candidates' political experience to their character. Duman and Locher (2008) examined Barack Obama and Hillary Clinton's YouTube campaigns and highlighted how the two presidential hopefuls attempted to create and uphold a conversational allusion through their videos. With regard to YouTube, another growing body of literature is concerned with the patterns of user interaction in this "viral marketing wonderland" (Burgess and Green, 2009, p. 3), and the methodological implications of exploring them. Three themes were identified for our research design. First, some studies have focused on what motivates users to share their videos on YouTube in the first place, ranging from perceived usefulness to interpersonal norms (e.g. Yang et al., 2010). Second, from a similar yet more specific perspective, some have established that YouTube is not only a source of information or entertainment for individual purposes, but also a new terrain for social interaction involving acts of "co-viewing" (Haridakis and Hanson, 2009), video responses (Adami, 2009), peer comments (Lange, 2007a; Jones and Schieffelin, 2009), and links to Facebook Walls (Robertson et al., 2010). In this sense, Chu (2009) went further to argue that YouTube plays a role as a "cultural public sphere". That said, others have taken a cautionary stance regarding YouTube's capacity to function as a public sphere. For example, Hess (2009) suggested there are certain obstacles to public deliberation on YouTube, such as the platform's dismissive and playful atmosphere. Moreover, Carpentier (2009) analyzed 16plus, a YouTube-like online platform provided by the north Belgian public broadcaster VRT, and suggested that users may not be as appreciative of additional means of participation as expected.
Lange (2007a) found that video-based communication on YouTube is no less hostile than faceless text-based communication. Blitvich (2010) added that impolite comments, although sometimes strategically employed, are likely to result in polarization.Chadwick (2009) argued that deliberative processes and "thick" citizenship cannot be the sole yardstick for the assessment of the functioning of e-democracy in the Web 2.0 era. However, if YouTube proves to be an unviable site for discussing serious political issues, it is important to ask why that is so. Answers to this question would advance our understanding of its political potential and how to harness it.
Research questions: As established in the previous section, YouTube is a uniquely interesting environment for political communication. What merits further attention is how this global platform intersects with local political and cultural dynamics. This is an underexplored line of inquiry, especially in the existing literature, which has been predominantly based on incidents from Western polities. With the aim of contributing to filling this lacuna, we address in the present paper the following three questions: 1. What were the salient features of the discussion surrounding the YouTube clip, and how did that discussion develop during the campaign period? 2. What were the salient features of the discussion surrounding the Daum clip, and how did that discussion develop during the campaign period? 3. To what extent and how did the two discussions differ, and what lessons can be drawn from the differences (if any) in terms of harnessing the political energy of Web 2.0?
Research design: Data collection
Results: Patterns of interactions among users
Discussion: Q1. What were the salient features of the discussion surrounding the YouTube clip?
Conclusion and future research: The dialectics between global media and local contexts have indeed been well documented (e.g. Turner, 2005), and the internet has complicated the matter even further. Contrary to the general description of the internet as a supranational network of computers, what we have is, in boyd's (2006) words, "all sorts of local cultures connected through a global network, resulting in all sorts of ugly tensions." What is unique about the case studied in this paper is, however, that users turned to the global space in order to circumvent a local political conflict, not the other way around - hence the neologism "cyber-exile". During the 2007 presidential election, Korean voters used YouTube to share election-related information because the laws of their country prohibited them from doing so on domestic websites. YouTube provided them with a higher level of anonymity than Korean sites. This incident of cyber-exile illustrates the tension between internet-mediated grassroots political activity and the authorities' restrictive interpretation and application of existing laws - the POEA in this case - to curtail that activity (see also Lee, 2011). However, the discussion surrounding the YouTube video clip was brought "back" to Korean cyberspace. Korean users are generally known for having a strong preference for local services (Lee, 2009, p. 312), but more importantly, YouTube could not provide a suitable environment for this particular discussion. Dialogs within YouTube's comment facility, among Korean as well as non-Korean users, were often off topic and, in some instances, unexpectedly turned into "racist flaming". Zuckerman (2010) analyzed various tools for circumventing internet censorship worldwide and suggested that circumvention cannot be a long-term solution. His argument is focused largely on technical aspects, but it still leads to important questions of how, and to what extent, we should then go "beyond" circumvention. We, the authors of this paper, do not intend to advocate the "walled garden" model for online forums, as opposed to the wild of YouTube, nor do we wish to suggest there should be zero government intervention. Our findings indicate that users can circumvent local regulations via the internet and other digital communication technologies (and probably more will do so), but subsequent discussions are likely to become fragmented as a result. Despite the specifics of the case studied, the significance of the present paper lies in the fact that it epitomizes the tension between "old" laws and "new" media. Moreover, our findings clearly demonstrate that an innovative circumvention attempt on the users' part is not enough to harness the potential of online discussion for measured, sustained discourse on the issue at hand. This study has an important limitation. We conducted the analysis on a real-time basis as the event unfolded, and therefore the need to compare YouTube and Daum arose only during the later stages of data collection. As a result, a point-to-point comparison was not feasible, which invites further research. Noteworthy is that on December 29, 2011, the Constitutional Court of South Korea declared the unconstitutionality of the NEC's extended application of Article 93(1) of the POEA to social media, particularly Twitter.
With the next presidential election scheduled for December 2012, future research should examine the impact of this court decision on the long-awaited relaxation of restrictions on election-related online communication in the country.
Acknowledgements: An earlier version of this paper was presented at the Oxford Internet Institute's 2010 conference, and some of the findings (answering different research questions) were published in a Korean-language journal. The authors are grateful to Ae-Jie Bae for her assistance in data collection.
|
Reframing integration: Information marginalization and information resistance among migrant workers
|
[
"Intermediaries",
"Migrants",
"Integration",
"Information behaviour"
] |
Summarize the following paper into structured abstract.
Introduction: In this study, I present the findings from a qualitative investigation that explored the other side of the issue of integration of migrants; that is, the views and perceptions of information intermediaries working with migrants in Israel about the integration process. The notion of integration has been the focus of academic debate in recent years and has been defined in different ways (Ager and Strang, 2008; Cheung and Phillimore, 2014; Farach et al., 2015; Gilmartin and Migge, 2015; Harder et al., 2018; Phillimore, 2012). In its broadest sense, integration means the process or transition by which people who are relatively new to a country become part of society (Rudiger and Spencer, 2003). Integration is achieved when "people, whatever their background, live, work, learn and socialise together, based on shared rights, responsibilities and opportunities" (Ndofor-Tah et al., 2019) while keeping a measure of their original cultural identity (Threadgold and Court, 2005). Integration is being characterized today as a multidimensional and multidirectional (Harder et al., 2018) process that encompasses "access to resources and opportunities as well as social mixing" involving adjustments by everyone in society (Ndofor-Tah et al., 2019).
Theoretical direction and literature review: Social integration of migrants
Methodology: Population of the study
Findings: The content analysis of the interviews revealed three major themes: information marginalization includes data that describe the different factors that keep migrants at the social margins and the ways this marginalization is reflected in their seeking of everyday-life information; information resistance includes data that describe the ways by which migrants hold off or rebuff accessing and receiving information; overcoming resistance includes data that reveal the ways by which migrants and social mediators try to overcome information resistance and, ultimately, information marginalization (see Table II).
Information marginalization: This theme has two main categories: elements of information marginalization that hinder migrants' social integration, and Ager and Strang's (2008) markers and means: three indicators of integration viewed through the perspective of information marginalization.
Elements of information marginalization: The categories in this theme relate to Ager and Strang's (2008) facilitators: language and cultural knowledge, and safety and security. Findings from the content analysis showed that participants did not view these elements as facilitators of integration, but rather as factors in the migrants' lives that hinder their integration into Israeli society. This theme comprises four subcategories: lack of cultural knowledge, lack of language proficiency, living in an unsafe and insecure environment, and discrimination.
Information resistance: The second theme in this study is information resistance. Content analysis of the interviews revealed that migrants resist information as a defensive behavior in response to the unstable and unsafe situation they face in their host country and to their lack of cultural knowledge. This theme includes two subcategories: secrecy and disinformation.
Overcoming information resistance and marginalization: This theme consists of two categories that describe the efforts of social mediators to find new, relevant ways to communicate with migrants as well as the role that social connections play in overcoming information resistance and marginalization.
Discussion: This study presented a new and more nuanced understanding of the process of integration of migrants by re-examining Ager and Strang's (2008) framework from an informational perspective told by the intermediaries who work with these populations. Findings from the content analysis revealed that the process of integration is shaped by the inability of both migrants and the institutions in the host society to close the cultural and social gaps that ultimately result in information marginalization.
Conclusion: Phillimore (2012) wrote, "Integration implies the development of a sense of belonging in the host community, with some renegotiation of identity by both newcomers and hosts" (p. 3). Findings from the study showed that it is in this oftentimes failed renegotiation that information marginalization emerges. By allowing intermediaries to articulate their views and opinions, I was able to understand how personal, cultural and social factors impacted the lives of migrants and distanced them from the sources of information and support they need to feel at home in their new country. Findings showed that for integration to be successful, it should be the result of an effort by both migrants and local institutions/structures to reach a middle ground of understanding and compromise. This study extends Gibson and Martin's (2019) notion of information marginalization to encompass both sides of the equation of marginalization, situating it in the social, economic and contextual conditions that create information poverty.
|
Narratives of (in)active ageing in poor deprived areas of Liverpool, UK
|
[
"United Kingdom",
"Liverpool",
"Elderly people",
"Social policy",
"Poverty",
"Deprivation",
"Active ageing",
"Narratives"
] |
Summarize the following paper into structured abstract.
Introduction: "I see they've been killing cats again [...]" "Better bring mine in, then."
Background to the case study: Demographic change has stimulated reviews of concepts of ageing, with "active" ageing emerging as an important focus (WHO, 2002; Walker, 2010; DESA, 2011). Activity in older age may be limited by ageism, poverty in its many guises including ill-health (Howse et al., 2011), and policy and resource constraints which inadequately support older people's welfare (Dean, 2012; Lymbery, 2012). Disparities between poverty and wealth prompt examination of social justice and fairness (Harvey, 1973; Rawls, 2001; Barry, 2005; Dorling, 2011). A minority of older people are wealthy; others rely solely on the state retirement pension; others have occupational pensions and some savings, but these may be eroding relative to living costs (HC, 2009, 2012). Internationally and nationally, the elderly are likely to be among societies' poorest (Price, 2006; Walker, 2009; McKee, 2010; DWP, 2012; DESA, 2011). Phillipson (1998) explores changing ideology and social policy relating to ageing post-1945, with ageing's demands becoming individualised in a postmodern setting and generational interdependence being questioned. The complexities of work-retirement transitions, age discrimination and the changing roles of older people in work and society merit critical attention (Phillipson, 2004a, c; Macnicol, 2005). Coleman et al. (1993) illustrate negative calculations of age-related costs and changing elder-support social networks, including family. Factors converge: global recession; neo-liberal policies of out-sourcing welfare to the private sector and voluntarism; and post-modern discourses of choice and consumerism (Walker, 2005; Coote, 2009). In different cultural settings, older people may find such socio-political change difficult (Moffatt et al., 2011; Miles, 2009). Concepts of ageing
The study areas and people: Most participants were primarily dependent on the state retirement pension. Additionally, ageing in poor neighbourhoods confers multiple disadvantages (Scharf et al., 2003), and the study areas are among the most deprived in England (CLG, 2010b; LCC, 2004-2011). During the study period, 2002-2007, Anfield, Clubmoor, Everton, Speke-Garston[3] and Tuebrook-Stoneycroft were among the poorest wards in Liverpool on measures including household income and income deprivation affecting older people, both pointers to other deprivations. Despite improvement since 2003, significant areas within the wards remain among the poorest 1-5 per cent in England (LCC, 2011): most (89 per cent) of Everton is within the poorest 1 per cent. High levels of poverty and deprivation persist in Liverpool, the once "proud second city of empire" (Belchem, 2006, p. 9). Table I illustrates the ranking of major cities in England on the Index of Multiple Deprivation (IMD) (LCC, 2011). Between 2007 and 2010, Liverpool remained the most deprived major city in England, in both depth (concentration) and geographical extent of deprivation. Liverpool's overall weak IMD ranking suggests that low-ranking areas within the city are considerably deprived (Table II). Socio-economic polarity within Liverpool is also evident. Among the study wards and Church, a comparator ward, Everton ranks first (worst) in Liverpool on most measures. Within other study wards, such as Tuebrook-Stoneycroft, there are small areas of relative affluence, but deprivation and ill-health are more evident; both have been associated with poverty (Howse et al., 2011). Years of male life expectancy range from Anfield (72.9) to more affluent Church (83.8), with Liverpool at 78.8 years and England at 82.0 years. Worklessness, rates of benefit claim, low incomes and weak educational performance suggest current and potential future deprivations.
The research methodology: Between 2002 and 2007, ACL commissioned studies in five deprived wards to inform policy-making. Liverpool PCT, part of the NHS, contributed informants. The work focussed on older people's needs, aspirations and barriers to active ageing.Participants and informants
Active ageing: participation in social and health activities: There are other factors, but lots of exclusion (from social and health activities) results from low incomes [...] though people may be too proud to admit it [...] income maximisation among older people is not only empowering, it can be a life saver. It means less anxiety and depression and more [...] fresh food, keeping warm, getting out [...] (Key informant). This observation was illustrated by comments such as: "keep-fit sessions cost a quid!" (widow aged 60); and "you've to spend £25 now on a week's shop to get free delivery!" (elderly couple). The problem here was not cash flow, but long-standing forced economy. The following sections explore participants' understanding of "active ageing", patterns in their health and social activities, and the implications of the findings for social policy. Active ageing: meanings
The home and active ageing: Satisfaction with "home" was important to well-being and active ageing. It could be the springboard to outside engagement, a place from which to contact the outside world through reading, telephones and, in some cases, computers, and somewhere to welcome family and friends. The home could be the repository of memory, an essential feature of personal security and identity; indeed, as one participant commented, "my home is me". Home could signify loneliness, however, especially if mobility and health were difficult. It could feel insecure within, if coping was problematic, and outside, in troubled neighbourhoods. Across the wards, between 12 and 18 per cent of participants said they felt insecure in their homes, mainly through fear of burglary. Visits from family and friends were "cheaper than going out", but lack of maintenance, cleaning and adequate heating could constrain the reception of visitors. Home help services enabled many to remain in their homes, but there was fear of elder abuse and reluctance to accept help from strangers. Reaction to sheltered accommodation varied, depending mainly upon the previous home's characteristics, health, and whether a partner was alive. Loss of independence could sadden, but for others security and less "worry" compensated. The worries included burglary ("I can open my window now for fresh air"), organising budgets and home maintenance. Some older residents felt less lonely and happier. Others were ambivalent, liking the greater security but disliking being observed, within others' structures of time, place and communal living. Some participants were currently homeless, primarily because of poverty and poor health. Homelessness was commonly preceded by intense difficulty in relationships, mental and physical health and finance, although housing associations appeared generally more supportive than private landlords.
Conclusions: "(In)active ageing" relates first to the contrast between narratives of older person as burden in relation to our observations of the pivotal role of many older people in family survival, working part-time, providing family care, or both. Some participants joked about their "inactivity", then gave accounts of selective disengagement, from post-employment "busy-ness" and "day-time television", to re-engagement in following interests: pigeons, local history and gardening were three examples. The title also recognises the barriers to active ageing, significantly built from lifetime inequity, in families and neighbourhoods.Lie et al. (2009) comment that older people's volunteering cannot substitute for funded, sustained and coherent care systems. In the UK, the Government's Big Society project (2010) has been criticised on a number of grounds: as rhetoric and cover for spending cuts (Corbett and Walker, 2012); contributory to the dismantling of the welfare state (Bone, 2011); and illustrative of short-term-ism (Ware, 2012). The "Big Society" is a wholly inadequate response to the needs of the deprived neighbourhoods we have studied. Here, the two major barriers to active ageing are long-term ill-health and disability underpinned by poverty. As Table I has indicated, Liverpool's deprivation is severe and persistent, so measures to effect improvement need to be well-supported and long-term.
Our minimum agenda for concrete action in the UK would include: * The setting of a national minimum income for active and healthy living (O'Sullivan and Ashton, 2011). * Ensuring that incomes at least meet this standard. * Implementing the Marmot proposals (2010) for a "fair society", with reductions in health inequity from the beginning of life to the end. * Restoring the retirement pension-Retail Price Index link. * Establishing a National Care Service to address one of the major needs of older people and their carers. * Establishing a clear and effective focus within government for oversight and development of policy affecting older people. * Ensuring that urban and rural areas are "friendly" to older people (ILC, 2011), which also means that they would be friendly to all. * Providing core funding for a range of opportunities in neighbourhoods for active participation by older people (Deeming, 2009). Lymbery (2012) points out that, especially in recession, an adequate governmental response to older people's needs is unlikely. We share that view. The conditions in the study areas are evidence of a lack of social justice, and the position is likely to deteriorate with further cuts in spending. Many informants were angered at the likely prospect of deterioration of already inadequate provisions in the wards, which is a fair response. In disseminating knowledge of Liverpool's situation, we hope to contribute to a growing body of knowledge and challenge about ageing in places characterised by these levels of poverty and deprivation.
|
Children's perceptions of obesity as explained by the common sense model of illness representation
|
[
"Qualitative research",
"Children (age groups)",
"Obesity",
"Individual perception"
] |
Summarize the following paper into structured abstract.
1. Introduction: Childhood obesity is a growing global health concern, with physical, emotional and social consequences frequently persisting into adulthood (Daniels et al., 2005; Must and Strauss, 1999; Doak et al., 2006; Reilly et al., 2003). Wang and Lobstein (2006) report that the obesity epidemic seems particularly prevalent among school-age children. Accordingly, the World Health Organisation reports that an estimated 22 million children under five years old are currently obese, and that even developing countries are facing a similar problem. For example, in Thailand, the obesity prevalence rate for children aged five to 12 years old rose from 12.2 percent to 15-16 percent over the course of only two years (WHO, 2008). The rate of childhood obesity also seems to be on the increase, even in developed countries. For example, a WHO survey conducted in the USA revealed that obesity in children aged six to 11 years has more than doubled since the 1960s (WHO, 2008). In Australia, recent figures from the 2004 New South Wales Schools Physical Activity and Nutrition Survey (SPANS, Booth et al., 2006) indicate that 26 percent of boys and 24 percent of girls in the state of New South Wales aged between 5 and 16 years were overweight or obese. This compares to 1985, when 11 percent of all young people aged 7 to 16 were overweight or obese (Booth et al., 2006). Childhood obesity has been shown to be associated with an increase in medical complications such as orthopaedic complications (Dietz, 1998), pulmonary consequences such as sleep apnea, asthma and exercise intolerance (Dietz, 1998; Ebbeling et al., 2002), and cardiovascular consequences, such as hypertension, dyslipidemia, chronic inflammation and blood clotting tendencies (Ebbeling et al., 2002). Children who are obese also face increased risks of non-alcoholic fatty liver disease (Daniels et al., 2005) and Type 2 diabetes (Daniels et al., 2005; Ebbeling et al., 2002). In addition to these physical consequences of obesity, children who are obese have also been shown to experience more psychosocial disturbances such as decreased social interaction, depression, impulse control problems and decreased perceived cognitive and athletic ability (Ells et al., 2006). Furthermore, Franklin et al. (2008) report that obese boys and girls appeared to have lowered self-concepts, with girls in particular experiencing significantly lower perceived social acceptance than their normal-weight counterparts (Franklin et al., 2008). Accordingly, normal-weight children's attitudes towards obese children or obese silhouettes have been found to be negative (Bell and Morgan, 2000) and unfavourable (Staffieri, 1967). Research has shown that an accurate and developmentally sensitive understanding of chronic illness in children is associated with enhanced satisfaction, an improved emotional state, a better quality of life and, most importantly, better compliance with treatment (Veldtman et al., 2000). For children this understanding is contingent on a number of factors such as developmental levels and illness experience. It is a widely replicated and supported view that children's perceptions of illness vary in both content and depth, particularly as a child's understanding of their illness cognitions appears to be linked to their developmental level (Bibace and Walsh, 1980).
Thus, attributions that children make about their own illness may be completely different from those of an adult or parent. For example, in the obesity/overweight literature, it has been found that adults understand a wide range of possible causes for obesity. These include junk food advertising during children's television viewing, genetic and endocrine factors, environmental factors, an imbalance between energy taken into the body and energy expended, as well as increased calorie intake through diet and sedentary lifestyles (Saelens and Daniels, 2003). In an Australian study of lay persons' understanding of childhood obesity, Hardus et al. (2003) found that the public had similarly sophisticated perceptions of the causes and prevention of obesity. Over half of the adults surveyed endorsed the view that media promotion of unhealthy foods and over-consumption of fast food were the main contributors to childhood obesity. Despite this, it appears that the only aspect of children's views of obesity that has been investigated is perceived causes. In a rare study examining childhood conceptions of the cause of obesity, Johnson et al. (1994) found that elementary school-aged children (in grades 1, 3 and 5) lacked much factual knowledge and held many misconceptions. For example, although children could identify "junk" foods as a cause, their definition of junk food included items like bread, pasta, potatoes, bananas and chicken. Many were confused about how fat, meat, salt and cholesterol led to obesity. In fact, some thought that cholesterol was a type of food. Only 7 percent identified lack of exercise as a cause of obesity, though many had no idea how exercise kept one from getting fat, or how a sedentary lifestyle could make a person fat. Although 4 percent mentioned calories as a cause of obesity, children did not know what calories were. Other causes of obesity identified were drinking too much milk or water and mother-to-unborn-child transmission (Johnson et al., 1994). Thus, factual knowledge seemed to be limited, with children not fully comprehending the content of information, or how certain factors work to contribute to obesity. With regard to examining childhood conceptions of disease, an accepted, holistic and current framework is the common-sense model of illness representation (CSM; Leventhal et al., 1980). The CSM is primarily designed as an explanatory framework for the way people perceive and react to disease threat (Leventhal et al., 2003; Conner and Norman, 2005). It is hypothesised that these "lay views" of illness are formed along five dimensions, or interpretive schemas (Hagger and Orbell, 2003). The five dimensions of the CSM of illness representations are: 1. Identity. Individual statements, beliefs, and labels for the illness. Leventhal et al. (2003) posit that this domain is critical as it joins feelings of vulnerability to general symptoms. 2. Cause. Perceived causative agents of the disease, which may include genetics, lifestyles or infections. 3. Timeline. The perceived duration of the illness. 4. Consequences. The perceived impact that the illness has on overall lifestyle. 5. Control/cure. Perceived efficacy of treatment (Hagger and Orbell, 2003), as well as perceptions of how preventable, controllable or curable a condition is (Leventhal et al., 2003). Empirically, the CSM has been validated as being applicable to many illnesses, as it was developed primarily to apply to the health field (Leventhal et al., 2003).
When applied to children, the CSM forms a good alternative to the Piagetian framework, which is frequently used in the study of children's perceptions of illness (Eiser, 1989). First, the CSM addresses more domains of illness than simply causative factors, a criticism made of applications of the Piagetian model (Eiser, 1989). Second, the CSM allows for these five schema structures to be reshaped and to develop in breadth as new information is gained from experience and other sources (Leventhal et al., 2003), a notion especially pertinent to children as they age and become more sophisticated in their cognitions. Empirically, previous studies have found that the CSM can be used to effectively describe children's perceptions of illnesses. For example, Goldman et al. (1991) reported that responses given by children aged four to six about the common cold tapped into the five domains articulated by the CSM. In addition, Paterson et al. (1999) also discovered that seven to 14 year olds' discussions of the common cold and asthma could be conceptualised in terms of the illness representations model.
2. Aims: The aim of the current study is to examine children's understandings of obesity. The current study aims to explore childhood perceptions of the identity, cause, timeline, consequences and cures/control of obesity.
3. Method: The current study recruited a total of 33 children. Participants were as follows: 12 children were aged seven to eight (36.4 percent), 13 were aged nine to ten (39.4 percent) and eight were aged 11 to 12 (24.2 percent). Age groups correspond to primary school grades 4, 5 and 6, respectively, and are the group for which childhood obesity has shown the greatest increases over time (Behn and Ur, 2006; Wang and Lobstein, 2006). There were 20 boys in the sample. The final sample comprised nine overweight/obese children (27.3 percent) and 24 normal-weight children (72.7 percent). Children were recruited through the Catholic Schools system, as well as a sample who were seeking appetite awareness training in Sydney, Australia. Children in this latter group were included in the study if their body mass indices fell within the childhood "overweight" or "obese" range defined by Cole et al. (2000). Children were interviewed using a semi-structured interview protocol designed to reflect the five dimensions of the CSM. Ethics approval for this study was granted by the University Human Ethics committee as well as the CEO of an Archdiocese of Catholic Schools in Sydney, Australia. The final sample is shown in graphical form in Figure 1. 3.1 Materials
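As a quick arithmetic check on the sample composition (including the corrected figure of 72.7 percent for normal-weight children, since the reported group counts must sum to 100 percent of the 33 participants), the snippet below recomputes the percentages from the counts given in the Method section.

```python
# Recompute the sample percentages from the counts in the Method section.
n = 33
counts = {"aged 7-8": 12, "aged 9-10": 13, "aged 11-12": 8,
          "overweight/obese": 9, "normal weight": 24}
for label, k in counts.items():
    print(f"{label}: {100 * k / n:.1f}%")
# -> 36.4%, 39.4%, 24.2%, 27.3%, 72.7%
```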
4. Results: 4.1 Identity
5. Discussion: In the current study, children of primary school age were interviewed on their views of obesity as guided by the five dimensions of Leventhal et al.'s (1980) common sense model of illness representations. In general, it was found that children were quite knowledgeable in all areas. The most obvious identity feature of obesity depicted by the children was a large stomach. If the negative connotations of stereotyping are ignored, this is encouraging, as children may be referring to a large waist circumference, which is associated with the most health complications in obesity (Wardle et al., 2008). However, the other identity features of obesity listed by the children were also primarily negative in nature. Such findings are consistent with previous research that has found negative societal attitudes toward overweight people among adults (Bell and Morgan, 2000; Pingitore et al., 1994; King et al., 2006; Hebl and Xu, 2001) and children (Hill and Silver, 1995; Staffieri, 1967). Children correctly identified consumption of junk food, overeating and non-engagement in activity and exercise as prominent causes of obesity, reminiscent of findings by Eiser et al. (1983) whereby children were able to identify being "healthy" as exercising, having a balanced diet and being energetic in general. Almost half of all children interviewed did not mention sedentary behaviour as a cause of obesity. A similar finding was reported by Johnson et al. (1994). This finding has also been reflected in the recent NSW Schools Physical Activity and Nutrition Survey (SPANS, 2004; see Booth et al., 2006). That study reported that despite a recent increase in school-aged children fulfilling recommended exercise requirements over the year 2004-2005, there was also a corresponding increase in children undertaking sedentary activities such as television, videos/DVDs, and electronic and computer games (Booth et al., 2006). This generally poor understanding of sedentary behaviours is of particular interest, as sedentary living is one of the major themes in recent media campaigns by the NSW government to decrease the childhood obesity rate. Perhaps such interventions need to directly address how sedentary behaviours lead to obesity, and encourage children to identify their own patterns of sedentary behaviour. The majority of children in the current sample stated that the timeline or duration of obesity was reliant on people undertaking positive health behaviours. This could also be interpreted as reflecting an internal locus of control, whereby the person is seen as being responsible for their own weight loss. As such, it may be that children believe that by either not engaging in these positive behaviours, or alternatively having failed in weight loss despite these behaviours, people may have "done the wrong thing". On the basis of their understanding of the causes of obesity, children appear to believe that the duration is short if people take appropriate action. This internal attribution of weight loss may mean that they fail to capture the multifactorial complexity of the issue. It was noted that normal-weight children were more adept than their overweight counterparts at detecting the most severe consequences of obesity. First, it may be possible that the parents of overweight children protect them from such negative consequences, as it is understandable that parents would aim to minimise distress in their children.
Second, it may also be that overweight children and their parents underestimate the true risks of obesity, as found by Jeffery et al. (2005). Lastly, the cures/control section of the interview yielded the finding that most children endorsed the idea that exercise was a potent cure for obesity. This finding may be seen as reflecting a general sense of an internal locus of control, in that it is seen as a person's own responsibility to carry out this behaviour to lose weight, and exercise, being a vigorous, visible activity, is the most obvious behaviour to contribute to this. However, this view is also limiting in that exercise is not the only component contributing to a healthy lifestyle, and needs to be undertaken in conjunction with other positive health behaviours. Hence, the multiple components of obesity and overweight also seem to be underappreciated by the children in the current study. Further interventions should make the multifactorial complexity of obesity clear to children, and further highlight the role of sedentary behaviours in maintaining this condition. The current study was limited in that most of the sample came from the Catholic education system in Sydney's eastern suburbs, which are more affluent in terms of socioeconomic status; this limits the study's external validity. Nonetheless, recent figures from the Australian Bureau of Statistics (www.abs.gov.au) indicate that obesity and overweight actually tend to be more prevalent in low-income households and living areas of greatest disadvantage. Future research in this area should therefore aim to recruit participants from areas of lower socio-economic status. In addition, a broader sample could also be obtained from public education settings. On a related note, the treatment-seeking sample in the current study also presents limitations, in the sense that results may be biased and not representative of the wider population. In this respect, wider sampling in a number of areas would be a further avenue to consider in future studies. In conclusion, this study has qualitatively examined children's views on facets of obesity as pertinent to the common sense model of illness representations. Results showed that children do indeed have well-developed perceptions of the identity, cause, timeline, consequences and control/cure of obesity, which nevertheless deviate in some qualitative ways from "expert" or "adult" understandings of these facets. Future obesity prevention interventions should take these childhood perceptions into account when considering improved adherence to regimes.
|
The hegemonic gender order in politics
|
[
"Gender",
"Discourse analysis",
"Politics",
"Critical feminist perspective"
] |
Summarize the following paper into structured abstract.
Introduction: Despite the ongoing increase of women in top positions of hierarchy, women continue to be underrepresented in politics, occupying 19.5 percent of seats worldwide, 22.8 percent in Europe, 22.6 percent in the Americas and 42.0 percent in Nordic countries (Inter-Parliamentary Union, 2017). In 2017, Italy ranked 44th out of 193 in the world classification compiled by the Inter-Parliamentary Union (2017), with a participation rate of Italian women in res publica of between 28 and 31 percent. Although the number of women elected to central institutions has increased compared to the past, women in Italy are still underrepresented in politics (Massa, 2013), and the number of women politicians decreases up the hierarchy of power (Fornengo and Guadagnini, 1999; Bonomi et al., 2013). Every president of the Italian Republic has been a man, and only 3 of the 14 people who have served in the institutional role of president of the Chamber of the Italian Republic from 1948 to date have been women (Italian Parliament, 2017). This position of women in politics, characterized by both a numerical increase compared to the past and a numerical decrease at the top of the power hierarchy, is well represented by the current government. The XVII Legislature, which started in 2013 and is close to conclusion, is the legislature with the largest number of women in Parliament: women account for 32 percent of deputies and 30 percent of senators, but women cover only 16 percent of the roles of leader, president of commission and member of the office of the presidency. Analyzing the composition of the Italian Parliament, it is clear that the disparity between men and women in politics becomes recognizable as roles become more prestigious. This condition is also confirmed at the regional level: women are present in the joint sessions in 29 percent of the councils and hold key positions (departments and offices with greater decision-making power) in 18 percent of the councils, while women are presidents of the region in only 10 percent of cases and leaders of the capital-city municipalities in only 2 percent (Openpolis, 2015).
Gender, power and discourse in politics: Studies on women and politics continue to focus on women's numbers in politics and on sex differences, and only with difficulty are they able to embrace the concept of gender (Kenny, 2007) as a category that structures and makes sense of particular social practices. However, feminist studies have developed complex theories of gender, including both patriarchal and discursive conceptions of power. The power inequality between genders relies on the concept of patriarchy, a system of social structures and practices of domination founded on the subordination of women by men (Walby, 1990). Gender relations are power relations involving formal public structures such as politics and private structures such as the family. The distribution of power between genders, based on a hegemonic patriarchal culture which naturally associates the female with the home and the male with labor and political activities, has produced an asymmetry in which the historical forced absence of women from prestigious positions, such as politics, involves a vicious circle whereby greater female participation in domestic work has become both cause and consequence of their exclusion (Walby, 1990). Through the internalization of gendered norms produced routinely in the discourses of everyday life, this gap in power between men and women has become invisible, misrecognized and recognized as legitimate and natural (Bourdieu, 1991), contributing to the consolidation of a "hegemonic masculinity" that preserves, legitimizes and naturalizes men's power and, consequently, women's subordination (Connell, 1987, 2016; Connell and Messerschmidt, 2005). Discourse, therefore, plays a significant role in the reproduction of dominance, which is considered an exercise of social power by elites, institutions or groups that has the effect of increasing social, political, class and gender inequality (van Dijk, 1993, 2011). Social power is enacted, reproduced or legitimized by the discourse of dominant groups or institutions (van Dijk, 1996).
Methodology: This study is based on 30 biographical interviews conducted with local Italian politicians, 15 of whom were men and 15 women. We chose to involve local politicians because the presence of women in local Italian institutions is lower than at the national level (ISTAT, 2017).
Who is to blame for the gender gap in politics?: Data analysis shows how the dominant gender order is performed and reinforced through the discursive practices of the politicians interviewed. The participants recount their experiences as politicians and produce discourses that clarify the norms regulating the political field. These discourses are part of a wider social context in which choices, roles and careers are gendered.
Conclusions: This paper explores the topic of the gender gap in politics through a discourse analysis of a group of Italian politicians and shows the patterns whereby a dominant gender order is constructed and reproduced. Discourse analysis discloses some interpretive repertoires used by men and women to confirm and reinforce the hegemonic gender order.
|
The role of frequent engagement in alliances in firm likelihood to patent: First wave alliances in UK bio-pharmaceuticals
|
[
"Innovation",
"Pharmaceutical industry",
"Strategic alliances",
"Patents",
"Biotechnology",
"C33",
"M10",
"O32",
"D74"
] |
Summarize the following paper into structured abstract.
1. Introduction: As the popularity of strategic alliances is increasing and such alliances are becoming an integral component in business development, attention in the research community has moved towards an exploration of their role in firm performance and innovation. Formal and informal interactions with external actors have long been argued to play a fundamental role in firm innovation (Von Hippel, 1988; Frankort et al., 2012; Rice et al., 2012; Colombo et al., 2011; Demirkan, 2018). However, it is accepted that alliances carry coordination costs and risks of misappropriation, and these frequently diminish the chances of success or preclude full acquisition of the anticipated benefits (e.g. De Man and Duysters, 2005; Gkypali et al., 2017; Faems et al., 2010). A substantial body of literature finds a positive relationship between the extent of alliances and firm innovation performance, whilst other work identifies diminishing and even negative returns (Hoang and Rothaermel, 2005; Sampson, 2005; Laursen and Salter, 2006; Rothaermel and Deeds, 2006). As a result, a growing strand in the literature has examined the factors enabling firms to generate and capture value from alliances. One such factor is alliance experience accumulation: as firms develop greater experience in managing alliances, they become better at coordinating cross-organisational tasks and knowledge flows (e.g. Sampson, 2005). Another factor that can improve performance in alliances is developing formal and codified processes and routines for alliance management (e.g. the establishment of dedicated alliance functions), which are argued to capture, or to be reflective of, firm-specific alliance management capabilities (Kale et al., 2002; Kale and Singh, 2009, 2007; Heimeriks and Duysters, 2007; Sampson, 2005; Schreiner et al., 2009; Di Guardo and Harrigan, 2016; Shukla and Mital, 2018). Whilst these contributions offer useful insights, much of current theorising has been constructed from cross-sectional data. Longitudinal explorations are scarce; thus we lack a nuanced understanding of the changes that occur within firms over time with respect to the enhanced innovation potential of engaging in alliances.
2. Theoretical background: alliance experience and alliance capabilities: To explain heterogeneity in firms' abilities to benefit from alliances, the alliance literature resonates strongly with knowledge- and capability-based theories of the firm (e.g. Kogut and Zander, 1992; Helfat and Peteraf, 2003). Here, we adopt a perspective informed by both the evolutionary theory of the firm (Nelson and Winter, 1982) and dynamic approaches to the resource-based view (RBV) (Helfat et al., 2007; Helfat and Peteraf, 2003). These approaches take a dynamic view of organisational development, emphasising the role of experience and knowledge accumulation in supporting improved management and coordination of organisational tasks and activities. Given their dynamic and longitudinal orientation, they are closely aligned with our own analytical approach, informing our exploration of the roles of alliance experience accumulation and frequent engagement in alliances in organisational learning and enhanced innovation.
3. Alliance experience and firm returns from alliances: Firms with greater experience can draw on a greater pool of situations about what has worked in practice when making decisions and inferences with respect to the performance of organisational practices (Levitt and March, 1988; Argote et al., 1990). Alliance experience (the cumulative number of alliances) improves firms' abilities to manage and coordinate alliances, to improve coordination of inter-organisational relationships and joint tasks, to form efficient arrangements for knowledge sharing, to deal effectively with unforeseen contingencies and to identify ways to overcome and resolve inter-partner conflict (Anand and Khanna, 2000; Sampson, 2005; Belderbos et al., 2015; Rothaermel and Deeds, 2006). Due to the link between alliance experience and organisational learning, experience is seen as a fundamental antecedent to both alliance and alliance portfolio capabilities (Wang and Rajagopalan, 2015; Kale and Singh, 2009; Shukla and Mital, 2018). The literature suggests that firms may not be in a position to benefit from experiential learning and superior coordination of alliances when facing power asymmetries and resource dependence. Conflict is more frequent in such alliances, which affects value creation and capture, especially for the weaker partner, who is at a comparative disadvantage (Diestre and Rajagopalan, 2012). Power asymmetries are likely in the bio-pharmaceutical sector, as large pharmaceutical firms may collaborate with small dedicated biotech firms and, owing to their longer commitment to alliances and historic investments in downstream capabilities, may be at a comparative advantage in deriving value from alliances (Caner and Tyler, 2013).
4. Frequent engagement in alliances as an antecedent to alliance capability: Deciphering, coding and measuring capabilities is notoriously difficult (Godfrey and Hill, 1995). As a result, the alliance literature has, in the main, relied on identifying alliance management practices (e.g. alliance functions) as a way of documenting alliance capabilities (Kale et al., 2002; Kale and Singh, 2007, 2009). An exception is found in the work of Rothaermel and Deeds (2006). They explore an inverted U-shaped relationship between cumulative alliance experience and new product development. They argue that, as the inflection point of the inverted U-shaped curve corresponds to the level of experience beyond which firms start experiencing inefficiencies in alliance management, it can reflect the level of their alliance capability.
5. Sample and methods: 5.1 Sample
6. Estimation and results: Table I presents descriptive statistics and bivariate correlations.
7. Tests of robustness[6]: The results suggest that there are diminishing returns to cumulative alliance experience. This can signal that ageing experience contributes less to current outcomes and that recent experience may contribute more (Shukla and Mital, 2018; Sampson, 2005). Following Sampson (2005), we explore the contributions of recent and past alliance experience. We develop a set of variables capturing alliance experience between one and six years prior to our year of observation. So, for example, in 2001 alliance experience of one year corresponds to the number of alliances initiated in 2000, alliance experience of two years to those initiated in 1999, and so on (a sketch of this construction follows below). None of these variables appears to be significant, with the exception of alliance experience of four years prior to the year of observation, which appears with a negative and significant sign. Therefore, the diminishing returns to cumulative alliance experience identified in our paper cannot be attributed to decreasing contributions of distant experience. Our results most likely reflect that firms cannot improve efficiency ad infinitum simply by forming more alliances and learning from experience how to better manage and coordinate them.
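As a concrete illustration of this lag-variable construction, here is a minimal sketch; the panel layout, firm identifiers and column names are our assumptions for illustration, not the paper's data.

```python
import pandas as pd

# Hypothetical firm-year panel with the count of alliances initiated each year.
panel = pd.DataFrame({
    "firm": ["A"] * 6 + ["B"] * 6,
    "year": list(range(1996, 2002)) * 2,
    "alliances_initiated": [1, 0, 2, 1, 3, 0, 0, 1, 1, 2, 0, 1],
})
panel = panel.sort_values(["firm", "year"])

# Lagged experience: alliances initiated k years before the observation year,
# e.g. for 2001 the one-year lag counts alliances initiated in 2000.
for k in range(1, 7):
    panel[f"experience_lag_{k}y"] = panel.groupby("firm")["alliances_initiated"].shift(k)

# Cumulative alliance experience prior to (not including) the current year.
panel["cumulative_experience"] = (
    panel.groupby("firm")["alliances_initiated"].cumsum() - panel["alliances_initiated"]
)
```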
8. Discussion: The paper contributes to the literature on the role of alliances in innovation and on alliance capabilities by using a longitudinal analysis, which enables capture of the impact on innovation of organisational learning in managing and coordinating alliances. Because learning is difficult to observe directly, the paper uses a longitudinal approach to trace dynamic changes within firms, inferring learning from its impact on outcome variables such as innovation. The paper adds to a slender body of work exploring antecedents to firm-level alliance capabilities and their impact on enhancing firms' abilities to innovate (for a review see Wang and Rajagopalan, 2015).
9. Implications for management theory and practice: Our research contributes to the literature on the antecedents of alliance and alliance portfolio capabilities and on the conditions that shape superior firm outcomes from alliances, such as innovation. It suggests a need to delve into the antecedents of alliance capabilities, a thin body of research, and to identify nascent factors that provide potential foundations for alliance capability development (Helfat and Peteraf, 2003), shifting attention away from the role played by the alliance management practices (such as dedicated alliance functions) that have dominated existing research. This is particularly important as firms may establish such practices during the advanced stages of the alliance capability development process (Kale and Singh, 2009), so such practices may not appropriately reflect the foundational stages of alliance capability development. Here, we echo calls to delve deeper into understanding alliance management capabilities and the antecedents of alliance portfolio capabilities (Wang and Rajagopalan, 2015), especially in the context of alliances involving a higher learning potential (Heimeriks, 2010). Moreover, recent research shows that codification of alliance learning and systematic approaches to alliance management contribute to efficient partner selection and alliance termination, but may restrict the flexibility and adaptability that are important for efficient management during the course of the alliance (Heimeriks et al., 2015; Wang and Rajagopalan, 2015).
|
Smartphones and wine consumers: a study of Gen-Y
|
[
"Wine",
"M-commerce",
"Social Networks Systems",
"Y-generatio"
] |
Summarize the following paper into structured abstract.
Introduction: "Omnichannel" has become a buzzword in retail for good reason. New technologies, such as mobile devices and social media, combined with better data, bring the long-time dream of a unified cross-channel shopping experience within reach. In practice, however, most retailers still fall short of achieving this vision, especially as it applies to Generation Y ("Gen-Y" - those people born between 1980 and 1991) and their use of smartphones. Mobile users are increasingly accessing social media using mobile devices, whether via browsers or apps. A study by Adobe (2013) among mobile users in the USA, Canada, UK, France and Germany found that most had accessed social networks using a mobile device, ranging from 94 per cent for those 18-29 years of age to 75 per cent of those 50-64 years of age. In fact, Facebook was the second most visited Web site/application that was accessed by smartphones and was the top smartphone app in the USA in August 2013 (ComScore, 2013). In countries, such as Italy and Germany, penetration rates of mobile phones exceed 100 per cent, with some consumers owning more than one mobile phone (Kaplan, 2012). Mobile phones and devices are increasingly used in conducting mobile commerce (Venkatesh et al., 2003, Ngai and Gunasekaran, 2007). Reputation is one of the reasons why customers rely on particular Web sites, apps or Web-apps, and some studies have investigated trust and reputation issues in a mobile ad hoc network environment (Lax and Sarne, 2008). Newman (2010) found that some 700,000 people view wine-related videos every month; there are over 7,000 wine tweets per day and > 300 iPhone apps for wine. Breslin (2013) estimated that 90 per cent of wine drinkers use Facebook 6.2 hours per week.
Brief presentation of Gen-Y's usage of social media embedded on smartphones: Who is a member of Gen-Y?
Confirmatory study of Gen-Y behavior with social media and m-commerce: Research method
Time and access to the Internet: Among the 190 respondents, 68 per cent connect to the Internet using their mobile; 38 per cent spend between 10 and 19 hours a week on the Internet, whereas 41 per cent spend more than 20 hours a week. Nearly all of them (95 per cent) access the Internet every day.
Purchase behavior and influence of peers' recommendations: Only 43 per cent purchase on the Internet at least once a month. They mainly use the Internet to stay in touch with friends and relatives (95 per cent), but also to look for information on a product (69 per cent). They also access discussion groups (29 per cent) and chat rooms (21 per cent).
Wine purchase and consumption: Regarding wine consumption and habits, 7.4 per cent of our respondents are members of a group dedicated to wine. Purchase frequency ranges from not buying wine at all (32.6 per cent) to buying wine several times a year (36.8 per cent). Wine buyers mainly buy it in supermarkets (56.3 per cent) and hypermarkets (19.5 per cent), but also in wine shops (19.5 per cent). They prefer to consume good wines:
Wine purchase and m-commerce: The Gen-Y consumers (56/190) who buy wine frequently (several times a month or more) consider the usefulness of information about a specific wine important (3.2/5) when looking up information on their mobile. This generation also rates highly:
Discussion, limitations and future research: As novice or potential wine consumers, members of Gen-Y are becoming increasingly significant targets for wine marketers (Mueller and Charters, 2011). This paper looks at the current state of m-commerce, the consumption of smartphones and social media and the transformation of the consumer into an omnichannel shopper. It also examines some responses to this emerging way of doing shopping which enriches the in-store experience with digital integrations, especially in relation to Gen-Y consumers. Armed with smartphones and tablets, wine shoppers go back and forth effortlessly between the real (whether in hypermarkets or supermarkets, convenience stores, wine shops, at the estate/winery or during a wine fair) and digital worlds (through the Internet, mobile app, wine club or by mail order). They are using their phones while in stores to research products and compare prices, and they order online and then pick up in person. At the same time, they consult friends near and far whenever they may find themselves contemplating a purchase, such as a nice bottle of wine. Every day, more of them come to expect a mobile or an "omnichannel" experience.
Limitations and future research: This topic is promising: France AgriMer (2012) shows that, among consumers below the age of 25, 45 per cent are occasional drinkers (once or twice a week), and that among those aged 25 to 34, 50 per cent are occasional drinkers. The Wine Market Council has also published data showing that millennials (Gen-Y) consume 24 per cent of the total volume. Finally, according to AC Nielsen (2011, March), Internet and mail order represent 10 per cent of total sales in the UK. These data show that the Internet is increasingly important and that Gen-Y members are occasional drinkers; therefore, to link those occasional drinkers with wine, it is important to develop pleasant platforms and interactive social networks. The advantage of providing good service on an m-commerce Web site translates into satisfied customers who become brand advocates: they can refer other people to the wine grower. Internet consumers talk to other consumers about a good customer service experience. For a service such as the purchase of wine from a particular wine grower who is already selling online and thinking of expanding into m-commerce, providing good customer service is a must. When customers tell other people about a bad experience, they do it on social networks to reach a large audience. This is why social networks must also be taken into account when planning an m-commerce strategy, to avoid negative buzz and thereby maintain a good e-reputation.
|
Towards global music digital libraries: A cross-cultural comparison on the mood of Chinese music
|
[
"Digital libraries",
"Cross-cultural",
"Chinese music",
"Mood perception",
"Music digital libraries",
"Music mood"
] |
Summarize the following paper into structured abstract.
1. Introduction: Music seeking and consumption are no longer confined by the boundaries of country, region or culture today (Lee et al., 2013). Music, as a cultural object, may be perceived differently by people from different cultural backgrounds, imposing a challenge on music digital libraries (MDL) in meeting the needs of a diverse audience (Weissenberger, 2015). Consequently, an increasing number of researchers have started to investigate various cross-cultural issues in the music information retrieval (MIR) and MDL fields. As people often seek music for emotional goals (Lavranos et al., 2015), music mood[1] has increasingly become a popular access point for music information in many MDL and online music services (Hu, 2010; Hu and Downie, 2007). This trend has raised questions regarding the applicability of music mood across cultural boundaries. Probably due to its subjective and context-based nature, music mood perception is often regarded as culture-dependent (Wong et al., 2009). A number of previous studies have compared mood perceptions of music in various cultures by listeners from different cultural backgrounds (e.g. Balkwill and Thompson, 1999; Fritz et al., 2009; Hu and Lee, 2012; Lee et al., 2013; Singhi and Brown, 2014; Egermann et al., 2014). Although specific findings vary, a general trend found in these existing studies is that listeners' perceptions of music mood can be influenced by their cultural backgrounds.
2. Background and related work: 2.1 A brief history and types of Chinese music
3. Mood representation models in MIR: There are mainly two kinds of models used to represent mood in music psychology and MIR: categorical and dimensional. The former uses a set of discrete terms (e.g. "passionate," "cheerful") to represent the mood of a piece of music. The most classical model of this kind is Hevner's (1936) adjective circle, in which eight mood categories are placed in a circle, with a set of terms in each category. Dimensional models, in contrast, represent mood with continuous values in a low-dimensional space. Different models may have different dimensions, yet valence (i.e. level of pleasure) and arousal (i.e. level of energy) are among the most popular. The dimensional model most often used in MIR is Russell's (1980). The categorical and dimensional models have their own pros and cons. Yang and Chen (2012) summarize that categorical models are more user-friendly as they consist only of natural-language terms, whereas dimensional models are advantageous in quantifying the intensity of moods. Notwithstanding the importance of dimensional models, categorical ones are more suitable for the current study, whose purpose is to validate and compare the values of mood metadata in the context of cross-cultural MDLs.
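To make the contrast between the two model families concrete, here is a small illustrative sketch; the mood terms, track identifier and coordinate values are invented examples, not the study's data.

```python
from dataclasses import dataclass

# Categorical representation: a track carries one or more discrete mood terms.
categorical_annotation = {"track_id": "cn-0042", "moods": ["passionate", "cheerful"]}

@dataclass
class DimensionalAnnotation:
    """Russell-style representation: continuous valence and arousal values."""
    track_id: str
    valence: float  # level of pleasure, here scaled to [-1, 1]
    arousal: float  # level of energy, here scaled to [-1, 1]

def quadrant(a: DimensionalAnnotation) -> str:
    """Map a (valence, arousal) point to a coarse mood quadrant."""
    if a.valence >= 0:
        return "happy/excited" if a.arousal >= 0 else "calm/content"
    return "angry/tense" if a.arousal >= 0 else "sad/depressed"

print(quadrant(DimensionalAnnotation("cn-0042", valence=0.7, arousal=0.4)))
# -> happy/excited
```

The dimensional form supports quantifying intensity (distance from the origin), while the categorical form maps directly onto browsable mood labels.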
4. Research design: 4.1 The participant groups
5. Results: 5.1 Characteristics of participants
6. Implications for cross-cultural MDL/MIR design: As advocated by Weissenberger (2015), a flexible organization system is needed for MIR/MDL to meet the needs of different musical traditions. The results of this study have important implications for designing MDL of this kind.
7. Conclusion and future work: This study compares Hong Kong and US listeners' mood perceptions of 29 Chinese music pieces, with the goal of investigating whether and how mood perceptions of the two groups of listeners differ, and how the differences can inform the design of cross-cultural MDL. Music mood was modeled with the MIREX five mood clusters and the results suggested further refinement of this model. The selected music pieces included six genres and styles of Chinese music, ranging from traditional folk music to several sub-genres of C-pop.
|
An integrative model for understanding team organizational citizenship behavior: Its antecedents and consequences for educational teams
|
[
"OCB",
"Educational teams",
"Team innovation"
] |
Summarize the following paper into structured abstract.
__NO_TITLE__: Today schools operate in a dynamic environment, each struggling to gain a competitive edge (Orr and Orphanos, 2011). This environment reinforces the understanding that schools should strive to employ teachers who are willing to go the extra mile, namely to engage in organizational citizenship behavior (OCB). OCBs are behaviors and actions that go beyond the formal role and what is laid down in the teacher's job description, and that contribute to the achievement of the school's objectives (Oplatka, 2006). OCB in schools is commonly taken to be an individual phenomenon, namely a personality trait or a social response to the behavior or attitude of superiors or to other motivation-based mechanisms in the workplace (Zeinabadi, 2010). This individual approach seems wanting: teachers perform or fail to perform OCBs not in a vacuum but in an organizational context, which most probably serves to encourage or discourage them (Somech and Drach-Zahavy, 2004). Only in the last two decades have some scholars studied OCB as a team or organizational-level concept (e.g. Ehrhart, 2004). Put otherwise, this team-level approach probes the possibility that teacher OCB can be better understood as a team or organizational feature that thrives in a context. This development is important because the aggregate OCB level, not sporadic actions by some individuals, influences organizational effectiveness (Organ, 1988). Moreover, the several studies that have captured OCB as a collective phenomenon examined either its antecedents (e.g. Hu and Liden, 2011) or its consequences (e.g. Yen et al., 2008). Very few researchers, if any, have explored the mediating role of team OCB in relation to specific antecedents and outcomes. The present research seeks to address this deficiency by examining OCB as a team phenomenon. The first premise of the research model is that it is important to measure OCB as a collective structure. Next, the mediating role of team OCB is investigated. Specifically, the research model posits that the contextual variable of a team's justice climate and the team's collective psychological state (psychological capital) will be positively related to team OCB. Furthermore, team OCB will be positively related to the outcome of team innovation. The model also suggests that team OCB will mediate the relation between the antecedents and the outcome.
Theoretical background and hypotheses: Teachers' OCB is defined as "[...] teachers voluntarily going out of their way to help their students, colleagues, and others as they engage in the work of teaching and learning" (DiPaola and Hoy, 2005, p. 390). OCBs are essential because formal in-role job descriptions cannot cover the entire array of behaviors needed for achieving school goals (Zeinabadi, 2010). They operate indirectly: they influence schools' social and psychological environment, enhance school effectiveness by freeing up resources for more productive purposes, help coordinate activities within the organization, and enable teachers to adapt more effectively to environmental changes (Sesen and Basim, 2012). Teachers manifest OCB, for instance, by staying after school hours to help a student with learning materials, aiding a colleague with a heavy workload, volunteering for unpaid tasks, or making innovative suggestions to improve the school (Somech and Oplatka, 2014).
Method: Sample and procedure
Results: Table I shows the means, standard deviations and correlations for the study variables.
Discussion: The present study explored team OCB from a context perspective. This approach is important because, to date, most studies on OCB in schools have focused on teacher OCB as an individual feature, without regard for its contextual nature (Somech and Oplatka, 2014). Put simply, although OCB is performed by individuals, the willingness to engage in OCB is not generated in a vacuum, and the team context most likely serves to encourage or discourage people from exerting these extra-role behaviors (Vigoda-Gadot et al., 2007). In this regard, our findings highlighted the importance of team-level antecedents and outcomes for better understanding how between-team differences in OCB develop and what their consequences are. The results contribute to the educational administration literature in several respects.
|
Decision '08: event marketing or product sampling?
|
[
"Sampling methods",
"Marketing strategy",
"Product trials",
"Target markets",
"Direct marketing"
] |
Summarize the following paper into structured abstract.
__NO_TITLE__: Recent trends in the evolution of marketing have delivered return on investment (ROI)-driven brand managers to an important crossroads. Should they choose event marketing or consumer sampling? Can it ever be both? If so, under what circumstances?
If sampling results are strongest when the trial opportunity is the greatest for the product category, why are so many brands risking the positive ROI of a targeted, direct-to-consumer sampling program, to invest in event marketing programs?: Most marketers believe they need to do something more to win over today's young adults - the target of most brand samples. Many marketers developing promotion plans hope to make a mark on the business by creating "cool" newsworthy promotions. Some brands choose to sample at events in the hope that interest in the event will deliver additional brand loyalty. Some marketers believe that integrating all elements of the marketing plan will deliver optimal results. This leads them to believe that their sampling results will be strongest in a program that has tied various promotional activities together.
Is the added expense of an event necessary?: More importantly, does it provide a better ROI? Here are some instructive case studies. Situation no. 1
Summary: Even when a targeted, direct-to-consumer sampling program has higher distribution costs, due to using a highly targeted list or having to mail the sample, it is unlikely that an event-sampling program could return a better ROI. That's because sample controls are not as good and trial numbers are likely to be lower, resulting in lower purchase numbers. It is also highly unlikely that any other benefit derived from an event could increase ROI enough to cover the substantial trial and purchase differences. What steps should marketers take to achieve the highest ROI from product sampling?
|
Strategic megabrand management: does global uncertainty affect brands? A post-9/11 US/non-US comparison of the 100 biggest brands
|
[
"Terrorism",
"Brands",
"Uncertainty management"
] |
Summarize the following paper into structured abstract.
Introduction: from brand management to megabrand strategies: The objective of this research article is to shed light on the evolution of brand management into a crucial strategic tool for international business operations. On the basis of the literature available in this field, we analyze the largest 100 brands (hereafter categorized as megabrands) in terms of ranking and value modifications over the 2001 to 2005 period, a mature globalization period, with the first ranking referring to pre-09/11 findings. The sample and its analysis provide significant findings that open crucial questions about US/non-US brand strategy and perceptions, and about the future application of global megabrand policies. We then shed light on the causal factors that global terrorism may contain and tentatively propose brand strategy solutions, without excluding other causal factors or co-factors that will need further inquiry. Overall, our hypothesis is that brands serve to bring security[1]. Accordingly, if the source of that brand is less secure, then it will be less effective as a brand. This hypothesis needs to be qualified: in particular, would one not expect the short-run reaction to insecurity to be more, rather than less, brand loyalty? The findings of this study strongly indicate that this assumption can be reversed, and we suggest that indirect impacts of global terrorism might be the reason. Further, it is important to show that negative security shifts in the US have been greater than a general increase in malaise in the global markets where the brands are sold. Again, the data indicate that movements in brand value and ranking appear to respond to more than such a malaise.

Having started as simple "identification tools", brand names have become a critical part of a company's strategy. Academic research has shown that one major historic reason for brand success is the diminished risk perceived by the consumer (Roselius, 1971; Kapferer, 1991; Keller, 1998; Riezebos, 2003). McCarthy (1971) highlights the three primary roles of a brand:

* identification and purchase simplification;
* a projective, symbolic and imaginary function that provides the consumer with a status; and
* a guarantee of quality, protection and risk reduction for the consumer by pointing to its source.

For these reasons, companies are willing to consider brands an important asset on their balance sheets and to invest huge amounts of capital to buy them (Laforet and Saunders, 1994). The power of brands is founded on consumers' aversion to uncertainty. For a long time, consumers made their food-buying decisions based only on a product's visual aspect, ignoring its brand name and accepting instead the grocery store owner's opinion as the selection criterion (Boyer, 2002). Later on, producers introduced clearly visible signals that identified their products, and consumers became used to preferring the signal over the product's visual characteristics (Keller, 1998); that is, the brand became more important than the product itself (Riezebos, 2003). Even at present, perceived risk reduction is the first reason consumers choose a brand, and this guides the evolution of brand management (Kapferer, 2003). When consumers perceive a risk in making a buying decision, they deploy different strategies for reducing it. Five major risks are considered by consumers:

1. Financial risk ("making a bad deal", which increases the importance of the brand compared with the unit price of the product).
2. Physical risk (being harmed by the product, especially food products).
3. Technological risk (being disappointed by the product's performance; the risk of functionality).
4. Psychological risk (feeling guilty or irresponsible for temptation, especially in impulsive decisions, or associating harm or risk to the brand, whether linked to fear or sadness).
5. Social risk (what peers will say or think about choices; the brand is thus a sign of possession for a community, but also a sign of adherence, of patriotism or of association with or away from particular social issues).

The risk reduction function directly related to the brand has been amplified by the macro-economic context, especially after 09/11, because a fragile and complex environment is expected to increase the role the brand plays in reassuring consumers' buying decisions. Even so, we later argue that the capacity of brands to link producers and consumers has been rudely challenged. There have been drastic changes in consumption habits in some markets, such as the accelerated rise of hard discounters in Europe, with a new approach to the quality-price relationship and a weakening of the brand, of low-cost airlines, and of non-brand textiles from low-cost production. Companies have reacted to these new challenges. This new environment has notably changed the way in which big international companies conceive of their brands. Brand guarantee and image are shelter points for consumers: normally, the higher the risk, the more helpful the brand. Consequently, brands have learned a different way of communicating (e.g. emphasizing safety themes, as carmakers already do), to change their relationship with the environment or with the Third World (e.g. Nike reconsidering its production policy in order to improve its brand image) and with globalization (e.g. being more respectful of local brands, as Nestle is). Brands have also started working on ethical matters (The Body Shop's cosmetics products), fair trade (Malongo coffee) and social responsibility. But one of the major facets of this adaptation of brands and firms to the new situation is the coming on-stream and acceleration of megabrands within companies.
Megabrands?: Traditionally, choosing brand strategies is a focal point for companies, whether they are multinational groups or local companies (Schuiling and Kapferer, 2004). Supposing that a firm has different sources of competition, one of the strategic issues is whether it uses one or several brands. Strebinger (2002) states that one of the most critical problems in branding relates to the management of a mono- or multi-brand system, while Riezebos (2003) questions whether it is feasible to have just a single-brand strategy in the company, with a prime focus on one brand from which additional brands are then developed. The historical development of branding includes some deeply contradictory factors, as shown in Figure 1. This figure visualizes and conceptualizes a company's willingness and need to have numerous products able to meet different customers' demands as appropriately as possible, so as to assure its expansion and international development, that is, to counteract the risk of being a single-brand company. Likewise, there is a need to limit the number of brands because of a second risk: that of brand overexposure or overuse, including the financial risk of dispersing investment. The first risk leads companies wishing to grow to buy or launch more brands in order to enter markets, segments or customer groups inaccessible with only one brand. This may be an "inflationist" process in terms of markets, as it leads to the creation of many brands. The second risk takes the same companies in the opposite direction, trying to limit the number of brands in order to maximize investment per brand, thereby making the brands stronger and covering more territory. But this process is intrinsically schizophrenic and raises the question of the strategic equilibrium of branding (Riezebos, 2003). Strategic choices may become brand choices, choices of brand organization or choices about the kind of relationship between brands that a company wants to maintain. One of the purposes of these choices is to maximize the equity of the company's different brands. As a way to escape this tension, many companies turn to megabranding.

At its origins, the evolution of the brand universe towards megabrands comes from big corporations that discovered, in the early 1980s, that they could create value by capitalizing on the transnational concepts carried in supranational brands so as to attain maximum return on investment (Kapferer, 2000). This new strategy reduced internal brand management costs and the costs of launching new innovative products. This simple idea has allowed many companies to focus on the strongest brands, on brands with high growth potential or on highly internationalized brands, and to abandon or minimize all others. Indeed, at the beginning, economic reasons were the main inspiration for this rationalization process: first, concentrating all human and economic resources on a few brands and, especially, cutting the advertising costs related to launching and maintaining multiple brands. The megabrand concept, thus, is a core concern for most leading transnational firms because, as the competitive environment becomes more and more complex, with a high level of risks of every nature, companies focus on megabrand strategy to assure their expansion and international development. In the early 1990s many companies informed the market of their intention to reduce their brand numbers: the most extreme case is that of Unilever, which planned to reduce from 1,600 down to 400 brands over the 2000-2004 period. Anthony Simon[2], President of Unilever-BestFoods marketing, underlined that "Unilever's objective is to reduce the number of brands in order to make them stronger. Four strategies support this decision: category, segment, channel and geography". In a megabrand strategy, a brand name may be used for horizontal extensions (within the same price layer, common for mass consumption products) or vertical extensions (in different price layers, common for durable goods). This strategy can be very successful; a well-developed brand can provide a sustainable competitive advantage. To ensure continuous success, the operation of a megabrand strategy demands permanent innovation, strong R&D investment, a communication style hard to imitate and a brand image based not on the product but on associations and perceptions. Megabrand management raises the focus of marketing to a superior, strategic decision-making level (Baldinger, 1990; Trinquecoste, 1999), as it implicitly involves focusing on the whole company instead of on individual brands (Riezebos, 2003). Both Juga (1999) and Reynaud (2001) show that by displacing competition to this superior level, competitive advantages become harder to understand (less tangible) and to imitate. The increasing recognition of brands as a source of sustainable competitive advantage stresses the importance of conceptual models of organizational brand strategies (Louro and Cunha, 2001). Therefore, our research goal is to explore the megabranding field and to evaluate its strategic dimension as a new, more complex and durable source of competitive advantage in times of international adversity and the challenges of 09/11-type terrorism.
Research methodology: We have chosen to analyze the evolution of the value of megabrands over a five-year period. The sample consists of those brands ranked annually in "The 100 best global brands" by the Interbrand corporation for Business Week magazine. Interbrand defined seven criteria (see Appendix) that evaluate brands much in the way analysts value other assets, i.e. on the basis of how much they are likely to earn in the future. To qualify for the list each brand must:

* have a value greater than $1 billion;
* derive about a third of its earnings outside its home country; and
* have publicly available marketing and financial data.

For these reasons Interbrand specifies that such heavyweights as Visa, Wal-Mart, Mars or CNN are excluded from the rankings. Only brands are taken into account (not parent companies such as Procter and Gamble), and airlines are not ranked because it is too hard to separate their brand impact on sales from factors such as routes and schedules. Despite its limits, this ranking provides a global view of the value of the main megabrands, and it has gained importance over the past years as a main reference for brand strategy. In addition, the assessment and evaluation method has not changed over the past five years. The rankings we refer to were published on the following dates: 6 August 2001, 5 August 2002, 4 August 2003, 22 July 2004 and 21 July 2005. We present these five rankings in the Appendix. The first ranking refers to the period prior to the 09/11 events. We have, at the same time, conducted in-depth research into whether other factors may be responsible for the results we have found. Charts that summarize these findings are also presented in the Appendix and, while it is certainly impossible to be exhaustive, they exclude any major movements, evolutions, malaises or crises that could have produced the effects found. This empirical research covered: size (trend and relative to industry), profitability (trend and relative to others in the industry), industry stage of life cycle, leverage (how vulnerable firms are to taking risks), country of origin (including characteristics such as access to capital, human resources, competition and an index of insecurity), movements in the scope of megabrands (global reach, horizontal and vertical branding), and changes in type of customers (e.g. services, packaged goods, durables, business); the data chosen for illustration covers only the main developments. By including such variables in the analysis, we strive to determine whether the risk hypothesis can be supported after controlling for other factors that lead to success, identifying other general factors that could shift megabrand positioning over time. No unexpected developments were found.
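As a sketch of the qualification screen just described (the data structure, field names and example brands are ours, and "about a third" is approximated as a hard one-third threshold):

```python
from dataclasses import dataclass

@dataclass
class Brand:
    name: str
    value_usd_bn: float            # estimated brand value in billions of USD
    foreign_earnings_share: float  # share of earnings from outside the home country
    data_public: bool              # marketing and financial data publicly available

def qualifies(b: Brand) -> bool:
    """Apply the three listing criteria described above."""
    return b.value_usd_bn > 1.0 and b.foreign_earnings_share >= 1 / 3 and b.data_public

candidates = [
    Brand("ExampleGlobalBrand", 4.2, 0.40, True),
    Brand("ExampleLocalBrand", 2.5, 0.05, True),  # fails the foreign-earnings test
]
ranked_pool = [b for b in candidates if qualifies(b)]
```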
Research results: The top 100 brands
Implications for brand marketing: Our initial assumption for this research was that international corporations adapted their brand marketing to globalization. We began by reviewing megabrand strategies put into effect over three decades, an option chosen by a wide range of companies to secure global, relatively easy and cost-efficient management of brands. We then raised the question of how megabrands evolved over the five years from 2000, with the objective of studying the validity of this strategy through an analysis of the evolution in value of the ensemble of megabrands worldwide.

The data analysis provides strong empirical findings and raises an important set of questions. The value of top US brands worldwide declined significantly after 2001 across the successive rankings of world megabrands, while non-US brands experienced significant expansion over the same period. This evolution is confirmed on all three levels of analysis that we developed: the total of the 100 leading brands, the total of the 20 leading brands, and the comparison between the leading ten US and non-US brands. Why is the value gap more significant in the top 20 brands than in the top ten? Are second-tier brands in this sample more vulnerable, and if so, why? The further we go down the ranks of the top 100 brands, the bigger the gap becomes between US and non-US brands, to the benefit of the non-US ones. Are business cycle trends responsible for this trend? Are these brands particularly symbolic in terms of nationality and risk perception since 2001 and global terrorism? Will consumers feel uncomfortable with certain brands since 9/11, and if so, what indications could allow us to understand this phenomenon? Is the dot.com burst responsible?

With these questions in mind, further analysis provides the following indications. The following tables, one with the 20 best evolutions and one with the 20 worst evolutions of brands, refer only to brands for which data are available for all five years. The findings indicate that among the top 20 "best evolution" international megabrands only eight are US American and 12 are non-US; that the two best performers by value are non-US (Samsung and Louis Vuitton); that PepsiCo is the top US brand (interestingly, it is the leading one in the US, while its brand-name competitor Coca-Cola, though still the best known, is among the bottom 20 brands); and that the second and third best American megabrands are Dell and Apple, while those one could consider best known, such as Microsoft and Oracle, are in the bottom 20. Does this mean that demand remained constant but a strong US image made them fall? Given the diversity of products and sectors represented, we believe that the dot.com bubble, being highly sector-dependent, cannot be the cause, or the sole cause, of the megabrand evolutions we note. Also, currency fluctuations in that period would rather imply opposite effects. If, then, the evolution of megabrand value over this five-year period is linked to brand nationality, in this case US or non-US origin, this would imply that corporations need to invest in megabrands emanating from different regions. If one considers that US brands may be more sensitive than non-US brands to consumers' risk perceptions arising from global terrorism, and that this terrorism could be a causal factor, because the data change after 2001, then the managerial objective is immunity to the consequences of such events. Given the crucial significance of such a cause to strategy, we provide some basis for understanding and, potentially, for resilience.
The perception of threat from 09/11 terrorism as causal factor?: Alexander et al. (1979, p. 4) define terrorism as "the systematic threat or use of violence to attain a political goal or communicate a political message through fear, coercion, or intimidation of particular persons or the general public". We can assume that the citizen and consumer in this general public is therefore exposed to stress scenarios that differ from typical ones and that may alter his or her purchasing behavior. It is widely admitted that with 9/11/2001, terrorism has become more global (Schneckener, 2002). 9/11-type terrorism is characterized by a proximity to western civilization, and its psychological impact is reinforced through widespread media coverage. Contemporary terrorist activities share a number of inter-related features of a recently resurrected nature: the increasing link of terrorist activity to a quasi-legitimization on the basis of allegedly religious motivation, modern business-like leadership structures, asymmetric warfare, and the use of the victim mostly as part of a communication strategy. The objectives of terrorists are to convey a triple message:

1. Government is not capable of guaranteeing the security of a society or citizen, nor of service or product safety.
2. Corporations, investors and travelers are safe nowhere, and the symbols of a country, culture and society that they convey are potential targets for any type of attack.
3. Any measure taken against terrorism is insufficient by nature.

These messages have a powerful impact on many. Psychological effects (defined above as any of the extremes, from feeling guilty or irresponsible for temptation, especially in impulsive decisions, to associating harm or risk to the brand, whether linked to fear or sadness) instill uncertainty into the economy, and have been found elsewhere to significantly affect the economic, organizational and governmental environment (Suder, 2004). Given this, we adduce that consumer behavior and corporate strategy may be affected. For instance, just as in times of war, the consumer may adopt a "stocking/storing" behavior for particular types of food and medicines if he or she perceives a terror-based threat. Therefore, we hypothesized elsewhere that a firm's performance under uncertainty and risk of terrorism will be a function of its ability to reduce its vulnerability to terrorist acts through risk analysis and assessment, through shortened supply lines, and through a decreased need for economic redundancy (Suder, 2006). This is even more so in the case of 9/11-type terrorism: a terrorism that has globalized and that hits the global activities of firms in addition to those at the location of a strike. In this section, we therefore focus on whether the top management of megabrands should take into account a corporation's vulnerability to the terrorist threat felt by consumers. If a brand has national symbolism, like Coca-Cola, then its goods or services are exposed to threats or acts of terrorism. Will the consumer turn away from the brand, or rather increase his or her faith in it? Our study could be interpreted as showing a possible link on a quantitative basis through a comparative approach. In this case, is a megabrand strategy still a reasonable option? To be deemed reliable, enterprises must be able to keep their brands resilient in the event of a catastrophe. The US airlines whose hijacked planes were crashed into the WTC in New York are the first illustration of the psychological impact on brands of a relation to terrorism. The symbolic relation to the events, though entirely involuntary, had dramatic consequences for both American and United Airlines. Also, since 9/11 a tendency has emerged for clients to fly shorter distances, on separate flights and with carriers lacking national associations, such as low-cost airlines (MacBain, 2003; Tourism Queensland, 2006; et al.). Markets melt down or freeze with great speed in the case of threats or terrorist acts, while other markets can rise because they are considered unrelated to the threat (Suder and Czinkota, 2007). Another example is the reluctance of Londoners to use public transport after the double attacks of summer 2005; the bicycle market, however, boomed almost immediately. The terrible human costs of terrorism are clearly unacceptable to any logic or ethics. Given that terrorism has existed in various forms throughout history, people, companies and industry now need to be knowledgeable about 9/11-type threats and their impacts, and to adapt.
International terrorism and brand marketing: a conceptual framework: International terrorism adds an important determinant to the definition of a firm's brand strategy. As an uncontrollable force in the firm's external environment, terrorist events may lead to direct (mainly physical) or indirect (for instance, consumer behavior and brand perception) disruptions. In the preliminary phase of threatened violence, or in the following phase of an attack's aftermath (for details of this classification, see Suder, 2004), consumer demand for the firm's goods and services may alter but does not always decline (e.g. the demand for security equipment and services increases); any related disruption to the value chain perceivable by the consumer, such as supply difficulties for needed inputs, resources and services, or government policies and laws enacted to deal with terrorism, alters the conduct of brand strategy. Macroeconomic phenomena and shifts in international relations also modify behaviors. The media play an important role in intensifying the related psychological effects. For instance, the political differences between some European states and the USA over the conduct of a war against terrorism, in particular concerning the invasion of Iraq, significantly modified consumer behavior in the USA towards French and German brands (such as Roquefort cheese, Perrier water, ... and even French fries, solely on the basis of their denomination). In these different dimensions, terrorist threat, act and aftermath affect:

* ways of life;
* perceptions;
* the consumption habits of millions of people all around the world; and
* the company-client relationship.

The responsiveness of consumers to a global threat is particularly high because it is intangible, close by, and may strike anyone anywhere, in an expression of the "flatness" of the world. The incalculable uncertainty becomes a certainty that terror events happen, and society and business adapt. The only certainty is that events will always be symbolic, whether that applies to locations, victims or the relation to the "hated" society. In this society anyone and anything can potentially be identified with the victims of attacks, whether human or object, whether a site, a product or a group. We therefore assume that this is so for brands, given their dependence on perceptions and image. For a corporation, brand strategy and the administration of price shifts, communications, distribution strategies, buyers and suppliers, logistics, imports and exports are directly exposed to cultural issues, image responsibilities and the consequences of actions. For a consumer, brands have the particular capacity to link producers and consumers who trust in a specific set of quality, service and security "guarantees" linked psychologically to a particular brand. Brand marketing is symbolic and relies on confidence, quite the opposite of fear or panic. The consumer will hence turn to (or away from) brands in proportion to how strongly the brand relates to the threat, exposing brand strategy to risks unrelated to good performance.
A study of megabrands as risk-savers: A brand is by definition the symbol of an object or a service, as well as a model of the consumption society (Keller, 1998). One major weakness of the megabrand approach is that it exposes the company to a major risk: a single brand, a single image. Needless to say, if a problem occurs with this brand, the whole company's stability is at stake. But consumers are also citizens, and so the brand may be a broader social and economic battleground among companies with respect to consumers. For example, brands also represent an important political space where virulent political battles can be fought (Semprini, 1992). Some movements embody or oppose, sometimes in a very radical way, lifestyles symbolised by brands and their influence; the consumer society becomes represented by companies and their brands (Klein, 2002). This contesting opposition must be taken into consideration when developing brands and their territories, in order to avoid the vulnerability and extreme exposure of a single-brand strategy. Various authors have already tackled this notion under the theme of brand capital or brand equity (Farquhar, 1989; Baldinger, 1990; Kapferer, 1991; Aaker, 1992; Keller, 1998). For Aaker (1992), brand capital is a unit consisting of the name and symbolic meaning of a brand that can add to or decrease the value of a product or service, and that delivers value to the client and to the firm. An appropriate strategy thus reinforces the value of brands, while an inappropriate strategy diminishes it. On the basis of our findings and the nexus that one may establish with 9/11, we suggest that megabrand strategy allows corporations to obtain critical size (especially vis-a-vis distribution channels), to face the growth limits of existing brands, and to share, soften and pool certain costs (research, industrialization, marketing), although the megabrand-building process is time-dependent and based on a variety of experiences. One can hence suggest that, if the causal link lies here, in the post-9/11 era megabrands allow companies better control of risks and increase brand value more effectively when locally or regionally embedded. If megabrand strategy overexposes brands as symbols of a mode of consumption rejected by, or associated with, terrorism, then megabrand overexposure diminishes the value of brands by overexposing firms to risks and brand devaluation and by increasing company vulnerability.
Conclusion: The findings of this research imply that brand strategy is highly dependent on external factors and needs to be adapted to them if competitive advantages are not to erode or shift considerably. While the causal link to 9/11 terrorism cannot be clearly established, it does appear to be one of the sensible explanations or co-factors for the dramatic evolutions found. These findings aim to contribute to the understanding of megabrand strategy in mature globalization. It appears from our research that brand nationality, and thus brand association with the various effects of terrorism (victimization or identification), may define the behavior of consumers and have an impact on brand value and ranking. For a future that may have to cope with 9/11-type terrorism, megabrands (except perhaps for the very strongest) may therefore not qualify as the best option for companies that wish to reduce risk and immunize brands and performance. If this is confirmed, firms are well advised to invest in megabrands anchored in regions, through a transnational rather than a global strategy. Clearly, further research into the potential causalities is needed: whether terrorism, business cycles, currency issues, the bubble effect, all of these together or none, international business scholars and practitioners are advised to study these links together, in each sector and market, so as to improve understanding of, and the capability to respond appropriately to, the evolution of megabrands in ranking and value since 2001.
|
Future employment selection methods: evaluating social networking web sites
|
[
"Selection",
"Recruitment",
"Social networks",
"Internet"
] |
Summarize the following paper into structured abstract.
__NO_TITLE__: Within the past few years, the phenomenon of social networking web sites (SNWs) on the internet has exploded into the mainstream. Further, this online information has begun to be used for purposes beyond its intended use. Owing to the vast amount of personal information on these web sites, employers have begun to tap into this information as a source of applicant data in an effort to improve hiring decisions. This study evaluates the use of SNWs in employment selection. Specifically, can trained judges consistently and accurately assess important organizational characteristics such as personality, intelligence, and performance using only a target's SNW information? In addition, the use of this information may lead to discrimination against applicants, given the wide range of available personal information such as gender, race, age, religion, and disability status that is otherwise illegal to use when making employment decisions.
Method: Participants and procedures
Results: Cronbach's alphas for the judge ratings of personality were calculated for each of the six ratees. These six alphas were then averaged for each of the big-five dimensions to estimate the overall internal consistency of the scales. In order to assess H1, interrater agreement in the form of average-measures intraclass correlation coefficients (ICCs) for the judge ratings is included in Table I. The scaled scores for the big-five personality traits and the single-item scores for IQ and performance were evaluated for interrater agreement. The 378 total ratings (63 raters × 6 ratings each) were used to calculate the ICCs. The ICC values were all adequate, ranging from 0.93 for extroversion to 0.99 for conscientiousness and performance. Since ICCs are expected to be higher with a larger number of raters, Table II also includes the number of raters for each characteristic which would be necessary to achieve a 0.50 ICC value. Although there are no firm guidelines for level of agreement, 0.50 was used in the analyses as it should provide a minimum level of acceptable agreement across judges. The Spearman-Brown prophecy formula was used to determine how many raters would be required to obtain an adequate (0.50) ICC value. Based on the 63 raters from this study, it was determined that between two (for conscientiousness and performance) and six (for emotional stability and extroversion) raters would be required to obtain a satisfactory level of interrater agreement. H2 was evaluated by conducting t-tests on score means in order to determine whether or not the means are statistically different from one another. In order to determine which means to test, the true scores (self-reported big-five personality scores, intelligence scores, and GPA) of the six rated subjects were evaluated. For each of the seven characteristics, the individual with the highest true score and the individual with the lowest true score were selected for analysis. Judge mean ratings for these subjects were then compared to determine whether or not raters are able to distinguish individuals high on a characteristic from those low on the same characteristic. This method also allows for evaluation of the direction of the relationship, such that (in addition to evaluating mean differences) the judge rating of the subject with the higher true score should be higher than the judge rating of the subject with the lower true score. Results demonstrate that the mean judge ratings for the subjects highest on the seven characteristics were statistically different from those for the subjects lowest on those characteristics. In addition, with the exception of openness to experience, the judges' mean ratings were higher for those with the highest true score, indicating the ability of judges to distinguish the traits of conscientiousness, emotional stability, agreeableness, extroversion, intelligence, and performance by evaluating SNWs[6]. Post hoc analyses were conducted to determine the impact of intelligence and personality on judge consistency and accuracy. Prior research has demonstrated mixed findings related to the impact of the rater's personality traits on rating accuracy. Ambady et al. (1995) found that less sociable (extroverted) raters were more accurate, while Lippa and Dietz (2000) found that only openness indicated more accurate raters. In addition, narcissistic raters have been found to be less accurate (John and Robbins, 1994), which may be relevant to the big-five since narcissism relates strongly to neuroticism.
Finally, intelligence has also been reported to relate positively to rater accuracy (Lippa and Dietz, 2000). In the current study, the 63 judges were asked to take the same intelligence and personality tests as the SNW subjects. The analyses conducted above were then re-evaluated across high versus low rater groups formed on intelligence and on each of the big-five traits. Results show no difference in interrater agreement based on these characteristics. However, judges who are more intelligent and more emotionally stable were shown to be more accurate in their judgments. More specifically, when the raters were split into high and low groups based on intelligence scores (the 31 highest scores versus the 31 lowest scores), the high-intelligence group differentiated significantly and more accurately between high and low characteristics for conscientiousness, emotional stability, openness, and performance. For example, with all 63 raters combined, the difference between rater means for conscientiousness in Table II is 0.38 (8.03 for the high ratee score and 7.65 for the low ratee score). When assessing high- and low-intelligence raters independently, the mean difference for the 31 high-intelligence raters is 0.61, but only 0.14 for the 31 low-intelligence raters. Thus, more intelligent raters seem to be more capable of assessing this trait than less intelligent raters. Similarly, raters who are the most emotionally stable also rate more accurately for conscientiousness, emotional stability, openness, and performance. For example, the mean difference across raters for high and low ratee conscientiousness is again 0.38, but is 0.73 for the 31 raters who are the most emotionally stable and 0.03 for the 31 raters who are the least emotionally stable. These results indicate the potential need for researchers to consider intelligence and emotional stability when selecting individuals who will serve as raters of characteristics such as personality.
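The rater-count analysis above is a direct application of the Spearman-Brown prophecy formula. The following Python snippet is a minimal sketch of that calculation, not the authors' code: it inverts the formula to recover the single-rater reliability implied by an average-measures ICC from the 63-judge panel, then solves for the smallest panel whose averaged ratings reach the 0.50 target. Because the published ICCs are rounded to two decimals, the implied panel sizes are only approximate.

```python
import math

def single_rater_icc(icc_k: float, k: int) -> float:
    """Invert Spearman-Brown: single-rater reliability implied by an
    average-measures ICC computed across k raters."""
    return icc_k / (k - (k - 1) * icc_k)

def raters_needed(icc_k: float, k: int, target: float = 0.50) -> int:
    """Smallest panel size whose averaged ratings reach the target
    reliability, per the Spearman-Brown prophecy formula."""
    icc_1 = single_rater_icc(icc_k, k)
    return math.ceil(target * (1 - icc_1) / (icc_1 * (1 - target)))

# Published average-measures ICCs from the 63-judge panel (rounded).
for trait, icc in [("conscientiousness", 0.99), ("extroversion", 0.93)]:
    print(trait, raters_needed(icc, k=63))
# Prints panel sizes of roughly 1-2 and 5, close to the paper's reported
# range of two (conscientiousness) to six (extroversion) raters; the
# small gaps reflect rounding of the published ICC inputs.
```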
Discussion: Based on the large volume of personal information available on SNWs, judges' ratings of the big-five dimensions of personality, intelligence, and global performance were consistent across the 63 raters in this study, demonstrating adequate internal consistency reliability and interrater agreement. In addition, the trained raters were able to accurately distinguish between individuals who scored high and individuals who scored low on four of the big-five personality traits, intelligence, and performance, providing initial evidence that raters can accurately determine these organizationally relevant traits by viewing SNW information. As stated earlier, other-rated personality has been shown to predict job performance. Considering that other methods of other-reported personality are unlikely to be viable in an employment selection context, SNW ratings of personality may be a practical approach. Owing to the theoretical and methodological differences between self-reported and other-rated personality, it is likely that ratings of personality via SNWs will provide incremental prediction of job performance beyond the predominant self-report approach. In addition, the differences in context between SNWs and a job interview (i.e. socially desirable responding in the job interview as well as the unique nature of information contained in SNWs) should similarly allow for unique prediction of job performance beyond what can be evaluated through personality assessment in the employment interview. This approach may be particularly valuable since these assessments take only a fraction of the time involved with other selection methods. This study is not without limitations. Although the analyses testing the consistency of the relationships of SNW ratings are based on 378 judge ratings from 63 raters, the analyses testing rater accuracy were conducted by testing for significant differences between the high and low performer on the seven characteristics for only six subjects. Future research should assess accuracy over a larger sample of subjects. We hope that the results of this preliminary study will not be used by organizations to support their use of SNWs in employment selection. Without further validation in a variety of studies, with larger samples and in a variety of organizational contexts, caution should be used when interpreting the implications of this study. This is particularly true given the potential for employer legal liability due to the vast amount of personal information available on SNWs. Information regarding gender, race, age, disabilities, and other criteria which should not be used when making hiring decisions will most certainly, consciously or not, influence who gets hired. Even if this information does not bias the hiring decision, disparate impact issues may still exist. Future research should also examine the potential issues of adverse impact and potentially illegal information in hiring decisions using personal information from SNWs. In addition, research should be conducted to compare assessments of SNWs to other employment selection methods, such as personality assessment, intelligence testing, and employment interviews. Based on the relative absence of research evidence in this newly developing area, particularly regarding the potential for adverse impact and the lack of validity evidence, we believe the most important practical implication of this paper is for organizations to use SNWs with these issues in mind.
Organizational representatives assessing SNWs should ask themselves two important questions. First, is the organization assessing (or could it be perceived as assessing) information which could lead to discrimination against a legally protected group? Second, is the specific social networking information used to help make a hiring decision valid in determining who will perform better on the job? The approach used in this paper of assessing personality traits, intelligence, or general performance begins to provide answers to these questions.
|
What factors influence firm perceptions of labour market constraints to growth in the MENA region?
|
[
"MENA region",
"Labour regulations",
"Labour skill shortages",
"Labour market constraints",
"Bivariate probit model"
] |
Summarize the following paper into structured abstract.
1. Introduction: Stringent labour market constraints are expected to pose serious obstacles to firm performance and economic growth. A wide range of literature finds that rigid labour regulations would induce lower labour force participation and higher unemployment rates (e.g. Botero et al., 2004; Besley and Burgess, 2004; Amin, 2009; Djankov and Ramalho, 2009), and would prevent labour markets from being efficient leading to losses in productivity (e.g. Kaplan, 2009). Another strand of literature inspects the problem of labour skill shortages or "skill deficits", which can be defined as the divergence between the educational attainments of workers and the skill requirements of jobs (Kiker et al., 1997). This literature regularly indicates that accentuated labour skill shortages impose significant restrictions on employment creation and economic growth (e.g. Pissarides and Veganzones-Varoudakis, 2007; Bhattacharya and Wolde, 2012), and could eventually inflict severe impacts on economic performance and labour market outcomes (e.g. Allen and van der Velden, 2001).
2. Review of related literature: 2.1. Labour regulations
3. Some considerations about data: The empirical analysis is carried out for the perceived levels of labour market constraints as reported by the respondents (e.g. senior managers, business owners) through the World Bank's Enterprise Surveys database. Pierre and Scarpetta (2004, 2006) examine the relationship between the perceived and actual stringency of labour regulations using national labour protection indices (i.e. de jure labour laws). They find that the reported perceptions are closely related to the actual levels of labour regulations' constraints. Specifically, countries with higher national indices on the stringency of labour regulations are associated with higher proportions of firms perceiving labour regulations as being significant constraints.
4. Data description and variables: We use a data set sourced from the World Bank's Enterprise Surveys database. This database represents a comprehensive source of firm-level data in emerging and developing economies. It covers firms operating in the manufacturing, service, and other sectors. It contains information on various aspects of the business environment such as access to finance, corruption, workforce characteristics, innovation and technology, and trade. It should be noted that one of the many advantages of using data from these surveys is that the questions are identical for firms across all countries. The basic data set used in this paper covers 5,052 firms located in eight developing Arab countries of the MENA region: Algeria, Egypt, Jordan, Lebanon, Morocco, Oman, Syria, and Yemen[5].
5. Empirical specification: Consider a given firm j (j=1, ..., J) belonging to sector k (k=1, ..., K) and located in country c (c=1, ..., C). Firm perception levels of constraints related to labour regulations and those related to labour skill shortages are depicted through the latent variables R*jkc = X′jkc βR + εR,jkc (Equation 1) and S*jkc = X′jkc βS + εS,jkc (Equation 2), respectively, where the errors εR,jkc and εS,jkc are allowed to be correlated. These latent variables are not observed. However, we observe the perceptions of firms through dichotomous responses on whether labour regulations and labour skill shortages do or do not pose major/severe obstacles to firm operations and development. Let Rjkc and Sjkc denote these binary responses, equal to one when the corresponding latent variable is positive (i.e. when the firm reports labour regulations or labour skill shortages, respectively, as a major/severe obstacle) and zero otherwise.
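To make the specification concrete, the following Python snippet is a minimal, illustrative sketch of a bivariate probit estimated by maximum likelihood; it is fitted on simulated data standing in for the Enterprise Surveys sample, so the covariates, coefficient values, and sample size are assumptions of the sketch, not the paper's actual specification or code. Each observation contributes the bivariate normal probability of its observed (R, S) pair, using the standard sign-flip trick q = 2y - 1 on the two linear indices and on the correlation.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import multivariate_normal

def neg_loglik(params, X, r, s):
    """Negative log-likelihood of a bivariate probit:
    R* = X @ beta_r + e_R, S* = X @ beta_s + e_S, corr(e_R, e_S) = rho."""
    k = X.shape[1]
    beta_r, beta_s = params[:k], params[k:2 * k]
    rho = np.tanh(params[-1])            # keeps rho inside (-1, 1)
    qr, qs = 2 * r - 1, 2 * s - 1        # sign flips for 0/1 outcomes
    ll = 0.0
    for i in range(len(r)):
        c = qr[i] * qs[i] * rho
        p = multivariate_normal.cdf(
            [qr[i] * (X[i] @ beta_r), qs[i] * (X[i] @ beta_s)],
            mean=[0.0, 0.0], cov=[[1.0, c], [c, 1.0]])
        ll += np.log(max(p, 1e-300))     # guard against log(0)
    return -ll

# Simulated stand-in for the firm-level data (hypothetical values).
rng = np.random.default_rng(0)
n, k = 200, 3
X = np.column_stack([np.ones(n), rng.normal(size=(n, k - 1))])
e = rng.multivariate_normal([0, 0], [[1, 0.4], [0.4, 1]], size=n)
r = (X @ np.array([0.2, 0.5, -0.3]) + e[:, 0] > 0).astype(int)
s = (X @ np.array([-0.1, 0.4, 0.2]) + e[:, 1] > 0).astype(int)

fit = minimize(neg_loglik, np.zeros(2 * k + 1), args=(X, r, s),
               method="BFGS")
print("estimated rho:", np.tanh(fit.x[-1]))  # near the simulated 0.4
```

A positive, statistically significant estimate of rho is exactly what the Wald test reported in the next section checks, and is the reason the bivariate probit is preferred to two separate univariate probits.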
6. Benchmark empirical results: Table III presents the marginal effects from the benchmark bivariate probit estimation carried out for the pooled data set covering existing firms' perceptions of labour market constraints. The Wald test rejects the null hypothesis of zero correlation between the errors in the two labour market constraints' equations and, hence, it indicates that the model should be estimated through the bivariate probit estimator rather than through the univariate probit estimator. The estimated coefficient of correlation between the errors in the two equations is positive and statistically significant at the 1 per cent level. Table III displays the unconditional marginal effects for Pr(R = 1) and Pr(S = 1), the probabilities of a firm perceiving labour regulations and labour skill shortages, respectively, as major/severe obstacles.
7. Empirical results by sector and by country: 7.1. Empirical results by sector
8. Conclusion: Labour market constraints are often identified among the main business obstacles facing firm operation and development in the MENA region. Therefore, they are naturally listed among the primary items on the labour policy agenda of the MENA countries. Understanding the factors influencing the perceived severity of labour market constraints is essential in the design of policies aiming at improving labour market conditions and enhancing business environments. This paper examines the implications of firm characteristics, national locations, and sectoral associations for the perceptions of firms located in the MENA region concerning two primary labour market constraints: labour regulations and labour skill shortages. The empirical analysis is carried out using a firm-level data set sourced from the World Bank's Enterprise Surveys database. A bivariate probit estimator is used to account for potential correlations between the errors in the labour regulations' equation and the labour skill shortages' equation. The empirical results are generated through overall estimations and by implementing comparative cross-country and cross-sector analyses.
|
Shared brands and sustainable competitive advantage in the Brazilian wine sector
|
[
"Marketing strategy",
"Qualitative research",
"Competitive strategy",
"Brands",
"Interviews",
"Geographical indications",
"Sustainable competitive advantage",
"Shared brands",
"Collective brands",
"Sector brands"
] |
Summarize the following paper into structured abstract.
1. Introduction: A brand can be seen as a strategic asset that helps a company to be more competitive. In the same way that companies invest in brands, countries can also be seen as brands (Anholt, 2005; Huang and Tsai, 2013; Kotler et al., 2006). In considering the role that the image of a country can play in a buyer's behavior, constructs such as the country's brand, the country's image and the country of origin may be attributes that offer the potential for companies to achieve a sustainable competitive advantage (SCA), in both the internal and external markets (Baker and Ballington, 2002; Hakala et al., 2013).
2. Literature review: 2.1 Sustainable competitive advantage
3. Methodology: This study was based on a qualitative approach, which, according to Bauer and Gaskell (2000), aims to understand in detail the beliefs, attitudes, values and motivations underlying people's behavior in specific social contexts (Malhotra, 2006). The research was exploratory and the field study was conducted with in-depth interviews (Bardin, 2011; Cooper and Schindler, 2014; Sampieri et al., 2010).
4. Results: 4.1 C1 - valuable
5. Conclusions: It can be concluded that the proposition that shared brands - GIs, collective brands and sector brands - provide SCA can be confirmed, as they fulfil the four conditions that form the VRIA acronym: valuable, rare, imperfectly imitable/replaceable and association. The value added to the product through the information contained in the shared brands facilitates the establishment of a relationship of trust between producer and consumer, being a source of competitive advantage.
|
Ensuring good governance in Singapore: Is this experience transferable to other Asian countries?
|
[
"Singapore",
"Governance",
"Government policy",
"Corruption",
"Good governance",
"Government effectiveness",
"Policy context",
"Public policy"
] |
Summarize the following paper into structured abstract.
Introduction: The People's Action Party (PAP) won the May 30, 1959 general election, assumed office, and attained self-government from Britain on 3 June 1959. Singapore was a different place then because it was a poor third world country, afflicted with a serious housing shortage (half the population was living in squatter huts), an unemployment rate of 14 per cent, political instability, labour unrest, corruption, and a high crime rate. However, today, after 53 years under the PAP government, Singapore has been transformed into a first world country, which is no longer afflicted by the problems it faced in 1959. This article contends that Singapore's ability to solve the problems it encountered after attaining self-government can be attributed to the effectiveness of the various policies introduced by the PAP government since 1959. The PAP government created the Housing and Development Board (HDB) in February 1960 to tackle the housing shortage and the Economic Development Board (EDB) in August 1961 to create jobs by attracting foreign investment to Singapore. These two statutory boards were formed to reduce the workload of the Singapore Civil Service (SCS), which was not equipped to solve the housing shortage or create jobs. Apart from being handicapped by rigid regulations and inflexibility, the civil servants also had a "colonial mentality" and were not attuned to the problems facing Singapore. Accordingly, the PAP government initiated these four policies to solve the country's problems: (1) reorganization and attitudinal reform of the SCS; (2) enactment of the Prevention of Corruption Act (POCA) in June 1960 to empower the Corrupt Practices Investigation Bureau (CPIB, 2011) to curb corruption effectively; (3) maintaining the tradition of meritocracy introduced by the British colonial government by retaining and decentralizing the Public Service Commission (PSC) to enhance its effectiveness; and (4) paying competitive salaries to ministers and senior civil servants from 1972 and benchmarking these salaries from 1995 to private sector salaries, to attract the "best and brightest" citizens to the SCS and government and to minimize the brain drain to the private sector. These four policies will be analyzed in turn below before providing the governance indicators for Singapore and assessing the transferability of Singapore's experience in promoting good governance to other Asian countries.
Comprehensive reform of the SCS: The PAP government realized on assuming office in June 1959 that it had to transform the colonial bureaucracy it inherited in order to ensure the effective implementation of its socio-economic development programmes. During the 137 years of British colonial rule[1], the SCS focused on performing the traditional "housekeeping" functions of maintaining law and order, building public works and collecting taxes. Consequently, the SCS did not play an important role in national development and did not introduce administrative reforms until after the Second World War (Quah, 1996a, pp. 62-3). Its victory in the May 1959 general election gave the PAP government the mandate to introduce comprehensive reforms to the SCS and to change the civil servants' "colonial mentality" by making them more sensitive to the needs of the population. Accordingly, it began by reorganising the SCS into nine ministries, including the new ministries of culture and national development, which were created to deal with nation-building and economic development respectively (Seah, 1971, pp. 82-3). The workload of the SCS was reduced by making the HDB and EDB responsible for providing low-cost public housing and attracting foreign investment to Singapore. Unlike the SCS, which could not implement the public housing and industrialisation programmes swiftly because of its regulations and red tape, the HDB and EDB as statutory boards were better equipped to implement these programmes expeditiously (Quah, 1985, pp. 124-6). In addition to reorganising the SCS, the PAP government realized the necessity of changing the civil servants' attitudes and convincing them to participate in the process of attaining national development goals. The Political Study Centre was opened by Prime Minister Lee Kuan Yew on August 15, 1959. In his opening speech, he hoped that the civil servants would change their "colonial mentality" once they were made aware of the problems facing Singapore. Their responsibility was to ensure the efficiency of the SCS. The purpose of the two-week part-time and non-residential course conducted by the Political Study Centre for senior civil servants was to change their attitudes and make them more aware of the local contextual constraints (Quah, 2010, p. 134). In addition to the Political Study Centre, the PAP government also relied on four additional methods to change the attitudes and behaviour of the civil servants. First, civil servants were encouraged to participate voluntarily in mass civic projects during the weekends to enable them to get better acquainted with the political leaders, and to provide them with an opportunity to engage in manual work. Second, Nanyang University graduates, who were Chinese-educated, were recruited from 1960 for the education service to reduce the predominance of the English-educated in the SCS. Third, civil servants who were found guilty of misconduct were disciplined, and the Central Complaints Bureau was formed in 1961 to enable the non-English-educated public to submit complaints against rude or incompetent civil servants. Finally, expatriate civil servants who were due for retirement were encouraged to remain in the SCS if they were competent, while their incompetent colleagues were retired prematurely (Quah, 2010, pp. 134-5). In short, the PAP government relied on the reorganisation of the SCS and attitudinal reform to change the "colonial mentality" of the senior civil servants because it needed their support to implement its policies effectively.
Minimizing corruption: Corruption was a way of life in Singapore during the British colonial period because of the British colonial government's lack of political will and the ineffective anti-corruption measures adopted. Corruption was made illegal in Singapore with the enactment of the Penal Code of the Straits Settlements of Malacca, Penang and Singapore in 1871. However, even though police corruption was rampant and confirmed by the 1879 and 1886 Commissions of Inquiry, the British colonial government ignored their findings and delayed the enactment of the Prevention of Corruption Ordinance (POCO) until December 1937 (Quah, 2007, pp. 9-12). The POCO was ineffective because it limited the powers of arrest, search and investigation of police officers as warrants were required before arrests could be made; and the penalty of two years' imprisonment and/or a fine of S$10,000 for corrupt offenders did not deter corrupt behaviour (Quah, 1978, p. 9). Similarly, the Anti-Corruption Branch (ACB) was ineffective because of the prevalence of police corruption. As the ACB was part of the Criminal Investigation Department (CID) of the Singapore Police Force (SPF), it was not surprising that the ACB was ineffective in curbing police corruption. Furthermore, the ACB was inadequately staffed with only 17 personnel and had to compete with other branches in the CID for limited manpower and other resources (Quah, 2007, pp. 14-15). The British colonial government only realized the folly of making the ACB responsible for curbing corruption when it was discovered that three police detectives and some senior police officers were involved in robbing 1,800lbs of opium worth S$400,000 (US$133,333) in October 1951 (Tan, 1999, p. 59). The Opium Hijacking scandal exposed the ACB's weaknesses and its inability to curb police corruption. Consequently, the British colonial government formed the CPIB as an independent agency in October 1952 to replace the ineffective ACB. During their campaign for the May 1959 general election, the PAP leaders demonstrated their commitment to curbing corruption by exposing the acceptance of S$700,000 by the Minister for Education, Chew Swee Kee, from some American donors (Quah, 2010, p. 175). The Labour Front government led by Chief Minister Lim Yew Hock was described as "being corrupted from head to toe" by a retired architect, Lee Kip Lin (Yap et al., 2009, p. 555). The PAP's revelation of the Chew Swee Kee scandal enabled it to win the May 30, 1959 general election by capturing 43 of the 51 seats and obtaining 53.4 per cent of the votes cast. When the PAP leaders assumed office in June 1959, an immediate priority was to ensure a clean government by adopting a zero-tolerance policy toward corruption. Minister Mentor Lee Kuan Yew explained in his memoirs why he and his colleagues were determined to keep Singapore free from corruption from the outset of their administration: We were sickened by the greed, corruption and decadence of many Asian leaders. [...] We had a deep sense of mission to establish a clean and effective government. When we took the oath of office [...] in June 1959, we all wore white shirts and white slacks to symbolize purity and honesty in our personal behaviour and our public life. [...] We made sure from the day we took office in June 1959 that every dollar in revenue would be properly accounted for and would reach the beneficiaries at the grass roots as one dollar, without being siphoned off along the way.
So from the very beginning we gave special attention to the areas where discretionary powers had been exploited for personal gain and sharpened the instruments that could prevent, detect or deter such practices (Lee, 2000, pp. 182-4). As corruption was a way of life in Singapore in June 1959, the PAP leaders learnt from the mistakes made by the British colonial government in curbing corruption and demonstrated their commitment by enacting the POCA on June 17, 1960 to replace the ineffective POCO and to strengthen the CPIB by providing it with more legal powers, personnel and funding. The POCA has three important features to rectify the POCO's weaknesses and to enhance the CPIB's legal powers and increase its personnel. First, the penalty for corruption has been increased to imprisonment for five years and/or a fine of S$10,000 to enhance the POCA's deterrent effect. Second, according to section 13, a person found guilty of accepting an illegal gratification has to pay the amount he had taken as a bribe in addition to any other punishment imposed by a court. The third and most important feature of the POCA is that it has given the CPIB more powers and a new lease of life. For example, section 15 gives the CPIB officers powers of arrest and search of arrested persons. Furthermore, the CPIB's director and his senior officers are empowered by section 18 to investigate the bank account, share account or purchase account of any person suspected of committing a corruption offence. Section 24 is perhaps the most important asset for the CPIB in its investigation of corruption offences because "the fact that an accused person is in possession, for which he [or she] cannot satisfactorily account, of pecuniary resources or property disproportionate to his [or her] known sources of income" is evidence that he or she had obtained these pecuniary resources or property "corruptly as an inducement or reward" (Quah, 2010, pp. 176-7). To ensure the POCA's continued effectiveness, the PAP government has introduced, whenever necessary, amendments or new legislation to deal with unanticipated problems or to plug legal loopholes. For example, in 1966, the POCA was amended so that a person could be found guilty of corruption without actually receiving the bribe as long as he had shown the intention of doing so (section 9). The POCA was also amended in 1966 so that, according to section 37, Singapore citizens working for their government in embassies and other government agencies abroad would be prosecuted for corrupt offences committed outside Singapore and would be dealt with as if such offences had occurred in Singapore. In 1989, the fine for corrupt offences was increased tenfold from S$10,000 to S$100,000 (US$78,730)[2]. On March 3, 1989, the Corruption (Confiscation of Benefits) Act 1989 was passed to enable the court to issue a confiscation order against the estate of a deceased defendant (Quah, 2010, pp. 177-8). Unlike the British colonial government, the PAP government has also demonstrated its political will in curbing corruption not only by enhancing the CPIB's legal powers but also by providing the CPIB with more personnel and budget during the past 53 years. Table I shows that the CPIB has grown by 17 times from eight officers in 1959 to 138 officers in 2011. Similarly, as indicated in Table II, the CPIB's budget has increased by nearly 20 times from S$1,024,370 in 1978 to S$34,073,400 in 2011 (Quah, 2010, pp. 179-80; Republic of Singapore, 2011, p. 378).
In contrast to the situation during 1952-1959, the CPIB has adopted a "total approach to enforcement" by dealing with both "big and small cases" of corruption in both the public and private sectors, "both giver and receiver of bribes" and "other crimes uncovered in the course of [the] corruption investigation" (Soh, 2008a, pp. 1-2). In addition to its emphasis on investigation and enforcement, the CPIB also focuses on corruption prevention by reviewing the procedures and practices in those government agencies where corruption has occurred and makes recommendations to remove the "loopholes and vulnerabilities." The CPIB employs this review process to "identify potential problem areas and loopholes" in order to minimize the opportunities for corruption (Soh, 2008b, p. 8). Finally, the CPIB's extensive outreach programme is implemented by its Public Education Group, which conducts prevention and education talks for pre-university students, principals, teachers, newly appointed civil servants, law enforcement agencies like the police and immigration department, and the management and staff of major organisations in key industries (Quah, 2010, p. 181). Table III shows that the number of persons attending the CPIB's prevention and education talks has increased from 2,500 in 2005 to 9,193 in 2010. Similarly, the number of foreign visitors to the CPIB has more than doubled from 1,000 to 2,538 during 2005-2010. The number of visitors from local organisations has risen from 20 in 2005 to 424 in 2008, and visits by students also increased from 150 to 791 from 2005-2009.
Decentralizing the PSC: The British introduced meritocracy to its colonies in Africa and Asia with the creation of the PSC, which is the adapted version of the Civil Service Commission in Britain. The raison d'être of the PSC is twofold: to insulate the civil service from politics; and to accelerate its localisation by replacing the expatriate officers with qualified local staff (Sinker, 1953, p. 206). With the advent of self-government and the increasing control of the civil service in the hands of the local population, the PSC was established in the British colonies to insulate appointments, promotions and discipline in the civil service from politics. Accordingly, the PSC was created in India in 1926, in Ceylon (Sri Lanka) in 1931, in Pakistan in 1947, in Hong Kong in 1950, in Singapore and Nepal in 1951, in Malaya in 1957, and in Bangladesh in 1972 (Quah, 2009, p. 810). Meritocracy was introduced in Singapore with the establishment of the PSC on January 1, 1951. The PSC's origins can be traced to the White Paper (Command Paper No. 197) entitled Organisation of the Colonial Service issued by the British government in 1946. Command Paper No. 197 stressed that progress toward self-government could only be achieved if the public services of the colonies were adapted to local conditions and staffed to the maximum possible extent by local people. More importantly, it recommended the establishment of PSCs in the colonies to ensure that qualified local candidates would be recruited into the public services (Quah, 2010, p. 72). The PSC in Singapore was formed with these two objectives in mind: to keep politics out of the SCS and to accelerate the latter's pace of localisation (Quah, 1982, p. 50). The PSC's second objective is no longer important today because the localisation of the SCS was completed with the attainment of self-government in Singapore in June 1959. However, its primary aim of keeping politics out of the SCS remains relevant as the aim of the PSC's programme as stated in the national budget is "to meet the staffing requirements of the government in accordance with the merit principle" (Republic of Singapore, 1980, p. 78). The PSC's evolution during the past 60 years can be divided into four stages, as depicted in Table IV. During its first 31 years, the PSC was the central personnel agency responsible for selecting and promoting civil servants on the basis of merit, disciplinary control, and the granting of scholarships and training awards. As the PSC members and selection boards relied mainly on personal interviews to perform their functions, their workload increased tremendously during this stage. Indeed, the number of candidates interviewed by them for appointments and promotions increased by nearly 19 times from 556 candidates in 1951 to 10,430 candidates in 1982 (Public Service Commission, 1954, p. 2; Public Service Commission, 1983, p. 5). Similarly, the number of disciplinary cases completed has also risen from 24 to 169 during 1957 to 1982 (Public Service Commission, 1959, p. 8; Public Service Commission, 1983, p. 18). The number of scholarships and training fellowships awarded has also increased from 23 in 1963 to 847 in 1982 (Public Service Commission, 1964, pp. 9, 19; Public Service Commission, 1983, p. 8). Finally, the heavy workload of Singapore's PSC becomes obvious when its output of interviewing 58,712 candidates during 1964-1967 is more than nine times that of the PSC in Ceylon, which interviewed only 6,485 candidates during the same period (Quah, 1971, p. 140).
The Public Service Division (PSD) was formed on January 3, 1983 on the recommendation of the Management Services Department to formulate and review personnel policies in the SCS and to ensure that these policies are implemented consistently in the various ministries. However, as the PSD was responsible for all personnel policy matters concerning appraisal, posting, training, schemes of service, service conditions, and welfare, its creation did not reduce the PSC's heavy workload of interviewing 50,274 candidates for appointments and promotions, completing 1,148 disciplinary cases, and granting 1,543 scholarships and training awards during 1983-1989 (Quah, 2010, p. 83). In March 1990, the Constitution of Singapore was amended to help the PSC cope with its heavy workload by increasing its membership from 11 to 15, including the chairman, and by creating two new sub-commissions, namely the Education Service Commission (ESC) for education and the Police and Civil Defence Services Commission (PCDSC) for the police and civil defence services. This move was designed to reduce the PSC's workload as the ESC would be responsible for appointing and promoting 21,000 teachers, and the PCDSC would take care of the appointment and promotion of 10,000 police, narcotics, prisons, and civil defence officers, thus leaving the PSC to deal with the remaining 34,000 civil servants. However, in spite of the establishment of the ESC and PCDSC, the PSC's workload was not reduced significantly, as Table V shows that the PSC interviewed 9,993 candidates (67.4 per cent) for appointment during 1990-1994, in contrast to the 4,254 candidates (28.7 per cent) interviewed by the ESC, and the 573 candidates (3.9 per cent) interviewed by the PCDSC. To enhance the SCS's ability to compete with the private sector for talented personnel, the public personnel management system in Singapore was further decentralized in January 1995 with the establishment of a system of 31 personnel boards, as shown in Table VI. As the creation of the ESC and PCDSC did not alleviate significantly the PSC's workload in appointing candidates to the SCS during 1990-1994, the ESC and PCDSC were dissolved and amalgamated into a single PSC on 1 April 1998 (Public Service Commission, 1999, p. 9). Unlike the ESC and PCDSC, the 31 personnel boards have reduced considerably the PSC's workload from 1995 to 2010. Table VII shows that the PSC considered 1,724 candidates for appointment, 308 candidates for promotion, completed 832 disciplinary cases, and granted 2,305 scholarships and training awards from 1995-2010. Finally, the comparison of the PSC's workload during its second, third and fourth stages of development in Table VIII shows that the decentralisation of its functions, which began in 1990 with the formation of the ESC and PCDSC and ended with the creation of the 31 personnel boards in 1995, has been effective because the PSC's workload in interviewing candidates for appointment and promotion has been drastically reduced from 50,274 candidates during 1983-1989 to 18,463 candidates during 1990-1994, and to 2,032 candidates during 1995-2010. Similarly, the number of disciplinary cases completed by the PSC has declined from 1,148 cases during 1983-1989 to 832 cases during 1995-2010. On the other hand, it is not surprising that the number of scholarships and training awards granted by the PSC has increased from 1,543 during 1983-1989 to 2,305 during 1995-2010 because this constitutes the PSC's major function today.
Paying for the "best and brightest": To balance the budget, a Cabinet Budget Committee on Expenditure recommended in June 1959 the removal of the variable allowances of Divisions I and II civil servants to save S$10 million and prevent a budget deficit of S$14 million (Quah, 2010, p. 103). The government restored the variable allowance in September 1961 with the improvement of the budgetary situation (Seah, 1971, p. 94). In 1968, the Harvey Commission recommended salary increases for five grades in the Division I superscale salaries. However, the government did not implement this recommendation until 1973 for two reasons: the economy could not afford a major salary revision and the private sector was not considered a serious threat in terms of competing for talent, as promotion exercises for senior civil servants were conducted frequently to retain talented personnel in the SCS (Lee, 1995, pp. 21-2). However, the improvement of the Singapore economy in the 1970s resulted in higher salaries in the private sector and aggravated the brain drain of talented civil servants to the private sector. The National Wages Council (NWC) was formed in February 1972 as an advisory body to formulate general guidelines on wage policies, to recommend annual wage adjustments, and to advise on incentive systems for improving efficiency and productivity (Then, 1998, pp. 220-1). The NWC recommended the payment of the Annual Wage Supplement (AWS) or "13th month pay" from 1972 to minimize the gap between salaries in the public and private sectors. The PAP government has relied on increasing the salaries of ministers and senior civil servants in 1973, 1979, 1982, 1989, and 1994 to reduce the growing differential with private sector salaries. On October 21, 1994, a White Paper on Competitive salaries for competent and honest government was presented to parliament to justify the benchmarking of the salaries of ministers and senior civil servants to the average salaries of the top four earners in six private sector professions, namely accounting, banking, engineering, law, local manufacturing companies, and multi-national corporations. The government accepted this recommendation and public sector salaries were benchmarked accordingly from January 1995 with the salaries of the six professions in the private sector (Quah, 2010, pp. 110-11). The 1997 Asian financial crisis and the subsequent slowing down of the Singapore economy resulted in a 2 per cent decrease in Superscale G and a 7 per cent decrease in Staff Grade I salaries and the reduction of the employers' contribution to the Central Provident Fund (CPF) from 20 per cent to 10 per cent for all employees. The purpose of the CPF reduction was to enhance Singapore's competitiveness by lowering the cost of doing business. In other words, the reduction in the CPF contribution meant an additional decrease in the salaries of the ministers and senior civil servants. When the Singapore economy recovered in 1999 with a growth rate of 5.4 per cent, and the reduction of retrenchments from 29,100 in 1998 to 14,600 in 1999, wages in the private sector began to rise again. Unemployment fell from 4.3 per cent in December 1999 to 3.4 per cent in March 2000. With the tight labour market in Singapore and the improved conditions in the private sector, Deputy Prime Minister Lee Hsien Loong revealed in parliament on 29 June 2000 that eight administrative officers had resigned in 2000.
The government increased the performance bonus component in the public sector salaries and broadened the benchmarking of these salaries to the top eight earners in the six private sector professions. Consequently, in June 2000, the variable component of annual salaries was increased from 30 per cent to 40 per cent of the total annual pay of the superscale administrative officers and ministers (Quah, 2010, pp. 113-14). However, public sector salaries were later reduced by a combined total of 30 per cent in November 2001 and July 2003 because of the recession. In December 2007, the PSD announced that the salaries of ministers and senior civil servants would be increased from 4 per cent to 21 per cent from January 2008. Table IX shows the salaries of the President, Prime Minister, Ministers, Permanent Secretaries, superscale civil servants at the entry grade, and Members of Parliament in 2007 and 2008. However, on 24 November 2008, the PSD announced that the salaries of administrative officers, political, judicial and statutory appointment holders would be decreased by 19 per cent in 2009 because of the economic recession. This means that the President's annual salary has been reduced from S$3.87 million to S$3.14 million. Similarly, the Prime Minister's annual salary has been decreased from S$3.76 million to S$3.04 million (PSD, 2008, pp. 1-2). Nevertheless, the Prime Minister's annual salary of US$2,183,516 in 2010 made him the best paid political leader in the world (Economist, 2010a, b). The policy of paying competitive public sector salaries has been effective in curbing the brain drain of political leaders to the private sector as none of them have resigned from political office to work in the private sector before their retirement. The attractive remuneration for the permanent secretaries has also been effective in retaining them in the SCS as none of them have left for private sector jobs before their retirement. However, paying competitive public sector salaries has been less effective in preventing Division I officers from leaving the SCS. An analysis of the resignation rate of Division I officers in the SCS from 1971-1984 shows that the salary increases in 1972, 1973, 1979 and 1982 had failed to curb the exodus of senior civil servants to the private sector (Quah, 2010, pp. 93, 119-20).
Governance indicators for Singapore: Have the above policies of changing the attitudes of civil servants, minimizing corruption, maintaining meritocracy in the SCS by decentralizing the PSC, and paying competitive salaries to attract the "best and brightest" citizens to the SCS, resulted in good governance in Singapore? The short answer is "Yes", as will be demonstrated in Singapore's rankings and scores on eight governance indicators. Government effectiveness
Transferability of Singapore's experience: Is Singapore's experience in promoting good governance transferable to other Asian countries? As Singapore's success in ensuring good governance is the combined result of the political will of the PAP government to solve the problems facing the country for the past 53 years and its favourable policy context, it will be difficult to transfer Singapore's experience in toto to other Asian countries because of the lack of political will and the unfavourable policy contexts in many Asian countries. Apart from good leadership, Singapore's favourable policy context has enabled its political leaders to stretch the constraints imposed by its small size and lack of resources by formulating and implementing effective policies to solve its problems during the past 53 years. As public administration Singapore-style is the product of the local policy context and the policies implemented by the PAP government, it would be difficult to replicate these policies in other Asian countries in view of the significant contextual differences between Singapore and these countries (Quah, 2010, pp. 246-51). Table XV shows clearly the significant contextual differences between Singapore and the other 25 Asian countries. First, in terms of size, Singapore is the second smallest country after the Macao Special Administrative Region, which has a land area of 29.2 sq. km. At the other extreme are the larger countries of China and India, which are 13,466 and 4,630 times larger respectively than Singapore. A second important contextual difference is Singapore's population, which is only larger than Mongolia's population of 2.7 million, Timor-Leste's population of 1.2 million, Bhutan's population of 0.7 million, Macao's population of 0.5 million, and Brunei's population of 0.4 million. On the other hand, Singapore's population of 5.1 million in 2010 is dwarfed by the huge populations of China, India, and Indonesia. The third contextual difference between Singapore and the other Asian countries is its economic affluence as manifested in its GDP per capita of US$40,920 in 2010, which is the second highest among all the 26 countries listed in Table XV. In contrast, four countries have a GDP per capita of less than US$1,000, namely Cambodia (US$760), Bangladesh (US$640), Afghanistan (US$517), and Nepal (US$490). In short, Singapore is a city-state, which is richer and smaller in terms of land area and population for the PAP government to govern than most of the other Asian countries. Singapore's favourable policy context has enabled the PAP government, which has been in power since June 1959, to implement policies effectively, to curb corruption, and to ensure the ease of doing business in Singapore, as demonstrated in Singapore's superior ranking on the World Bank's governance indicator on government effectiveness from 1996-2010, the Doing Business Surveys from 2007-2012, Transparency International's CPI from 1995-2011, and PERC's corruption surveys from 1995-2011.
Conclusion: In his National Day Rally speech on 19 August 1984, then Prime Minister Lee Kuan Yew attributed Singapore's success to the quality of its political leadership: In the end, whatever the system, it is the quality of the men who run it that is decisive. For they will decide what to make of the society, and how to get the people to give of their best. The Singapore system has worked. It will continue to work if you vote for honest, able and dedicated men, and you give them your best, for the good of all (Lee, 1984, p. 18). More recently, Lee acknowledged the importance to Singapore's development of attracting the "best and brightest" citizens to join the government and the SCS thus: My experience of developments in Asia has led me to conclude that we need good men to have good government. [...] The single most decisive factor that made for Singapore's development was the ability of its ministers and the high quality of the civil servants who supported them. [...] It was Singapore's good fortune that we had, for a small, developing country, a fair share of talent, because our own [talent] had been reinforced by the talented men and women who came here for their education, and stayed on for employment or business opportunities (Lee, 2000, pp. 735-6). In the same vein, Edgar H. Schein (1996, pp. 221-2) has identified the policy of having "the best and brightest" citizens in government as "probably one of Singapore's major strengths" because "they are potentially the most able to invent what the country needs to survive and grow." Furthermore, he has described Singapore as "one of the few models existing in the world of how a society can progress with a government that attempts to maximize intelligence, skill, and honesty." In sum, Singapore's transformation from a poor third world country in 1959 to an affluent and politically stable first world country today is the result of the ability of its political leaders and civil servants to formulate and implement policies to solve the country's problems during the past 53 years. First, the PAP leaders reorganised the SCS and changed the attitudes of the civil servants by convincing them to contribute to the attainment of national development goals. Second, they continued with the tradition of meritocracy introduced by the British by retaining the PSC and enhancing its effectiveness, reducing its heavy workload by decentralizing its functions of appointment and promotion to the ESC and PCDSC in 1990, and to the 31 personnel boards in 1995. Third, they learnt from the mistakes made by the British in curbing corruption by enacting the POCA in June 1960 to enhance the CPIB's effectiveness in combating corruption. Fourth, the PAP government's success in promoting economic growth enabled it to compete for talented personnel with the private sector by paying competitive salaries to ministers and senior civil servants from 1972 onwards to prevent them from leaving for private sector jobs. In the final analysis, whether these four elements of Singapore's success - institutional and attitudinal reform of civil servants; zero-tolerance for corruption; meritocracy in appointing and promoting civil servants; and paying competitive salaries to attract the "best and brightest" citizens to join the government and civil service - can be replicated in other Asian countries depends mainly on whether their political leaders and senior civil servants have the political will and are prepared to pay the high economic and political costs of implementing these policies.
|
The role of the marketing function in small and medium sized enterprises
|
[
"Marketing strategy",
"Small to medium‐sized enterprises",
"Competitive advantage",
"United States of America"
] |
Summarize the following paper into structured abstract.
Introduction: A precept of the marketing concept contends that business achieves success by determining and satisfying the needs, wants, and aspirations of target markets. Few would argue that this determination and satisfaction of target market wants and needs is critical for firm success. These concepts, traditionally thought to be part of the marketing function of the firm, have fueled scholars' interest in the role of marketing within the firm (e.g., Becherer et al., 2003; Berthon et al., 2008; Moorman and Rust, 1999; Simpson and Taylor, 2002; Webster, 1981, 1992, 2003; Webster et al., 2003). Scholars have identified significant differences between large and small organizations. Large organizations tend to use a structured framework with a clear hierarchy in decision making. On the other hand, small firms tend to feature processes that begin with and highly involve the entrepreneur or owner. Furthermore, the entrepreneur or owner's personality and style help to shape decision making (Sadler-Smith et al., 2003). The small-firm sector plays a significant role in the world economy. In the United States, a recent analysis of employment changes between September 1992 and March 2005 showed that 65 percent of the net new jobs created during that time are attributed to firms with fewer than 500 employees (US Bureau of Labor Statistics, 2005). Research investigating the competitive advantage of small firms has consistently emphasized the importance of marketing, strategic positioning, and entrepreneurship as key factors in business survival and growth. The ability to identify and operate in a particular market niche enables the firm to exploit a range of specializations and offers protection from larger competitors. Yet despite the widespread acceptance of the importance of the marketing concept, the precise marketing activities and competencies that contribute most strongly to business performance must be identified for small and medium-sized enterprises (SMEs). Our study of SMEs revisits many of the questions Homburg et al. (1999) posed in their study of the marketing functions of large firms, but we direct our attention to SMEs. The definition of what precisely constitutes an SME differs widely. For the purposes of this paper, we follow the Journal of Enterprise Culture and the Asia Pacific Economic Cooperative (APEC), who define US-based SMEs as companies with fewer than 499 employees (APEC, 2003). In this research, we seek to answer the following questions: (1) relative to other business functions, how important is the marketing function in SMEs? (2) does marketing in SMEs enjoy the same influence found in larger firms? and (3) what internal and external factors influence the role of marketing in small firms? This paper reports the results of a study that employed an activity-based approach whereby demonstrable marketing competencies were related to the entrepreneurial orientation evident in the firms. The analysis demonstrated that certain competencies were more strongly associated with a marketing orientation, whereas others were associated with an entrepreneurial orientation. The results show differences in the role of marketing in small firms as compared with the role of marketing in the large firms that were studied by Homburg et al. (1999). SME marketing functions have a way to go before they enjoy the same degree of influence their counterparts have in large firms.
Literature review: Inquiry into departmental influence (such as marketing) is based on Cyert and March's (1963) work on subunit power. Cyert and March (1963) argued that managers have conflicting goals and seek satisfactory solutions as opposed to optimal solutions to business problems. The concept of "dominant coalition" has been used to explain differences in the power of firm subunits (Thompson, 1967). The relative power of subunits is a function of strategic choices made by dominant coalitions (i.e. power members) within an organization. Influence has been defined as "a change in one party that has its origin in another party and this embodies the successful exercise of power" (Stern and Scheer, 1992, p. 260). Our study focuses on the marketing department subunit. Various authors have examined the role and influence of marketing in firms (e.g. Becherer et al., 2003; Berthon et al., 2008; Moorman and Rust, 1999; Simpson and Taylor, 2002; Webster, 1981, 1992, 2003; Webster et al., 2003) and since the 1980s, the marketing department has been shown to have varying levels of influence in the firm. Homburg et al. (1999) directly studied the influence of marketing in large firms and uncovered a number of situational factors that affect the level of marketing's influence. These situational factors include a firm's strategic orientation (differentiation versus cost leadership) and the background of the CEO. Although their study helped to define the relative level of influence of the marketing function for business, the authors focused on large organizations - firms with more than $25 million in sales. However, SMEs have characteristics that differentiate them from large organizations (McCartan-Quinn and Carson, 2003). These differences include advantages such as greater flexibility, innovation, and lower overhead costs. In terms of disadvantages, SMEs are limited by their market power, and capital and managerial resources (Motwani et al., 1998). These differences raise the question, "does the marketing function in SMEs enjoy the same influence found in larger firms?" Compared with the literature stream surrounding the role of marketing in larger firms, inquiry on this topic has been more limited for SMEs. Many studies have attempted to define marketing and the outcomes of marketing for SMEs. Carson (2001) and Sui and Kirby (1998) traced the evolution of marketing and the various approaches to SME marketing. Other authors have attempted to develop hypothetical and empirical models of marketing for SMEs. Sui et al. (2004), Julien and Ramangalahy (2003) and Berthon et al. (2008) showed how strategic marketing practices such as knowledge of current market conditions and consumer tastes were positively related to SME performance. Becherer et al. (2003) examined internal environmental factors such as the background and decision processes of CEOs. One aspect of marketing, promotional efforts, was found to be a key influence on the performance of SMEs (Wood, 2006). Market orientation as a driver of SME business performance has also generated scholarly interest (Blankson and Stokes, 2002; Fillis, 2002; Pacitto et al., 2007). Finally, authors have studied the underlying reasons for the characteristics of SME marketing practices. Simpson et al. (2006) examined drivers of marketing effort such as the presence of a marketing department and marketing representation at the board level.
Because SMEs lack the resources to compete head-to-head with larger rivals and thus cannot do traditional marketing (Gilmore et al., 2001), some scholars have questioned whether SMEs formally practice marketing at all (Carson et al., 1998; Gilmore et al., 2001; Shepherd and Ridnour, 1996). In a typical argument, Hogarth-Scott et al. (1996) considered most marketing theories to be inappropriate for SMEs and not helpful in the understanding of their markets. However, in the same article, the authors found that the marketing function contributed positively to small business success and the ability to think strategically. Clearly, marketing departments exert influence in SMEs, especially in SMEs that succeed. Larger firms achieve better competitive positions than smaller firms when they have greater marketing capabilities (Grimes et al., 2007), so SMEs must maintain strong marketing departments to compete effectively. Evidence has shown that although marketing activities in SMEs may be different, marketing departments are still critical to firms' success. Many firms carry out business via highly informal, unstructured, reactive mechanisms, whereas others develop, over time, a proactive and skilled approach in which innovation and the identification of opportunities give them a competitive advantage (Fillis, 2007). Romano and Ratnatunga (1995) observed that "small firms face marketing challenges which can and will ultimately determine their future." Furthermore, Hill (2001) found that SMEs still engage in many traditional marketing functions, especially marketing planning. A widely cited marketing activity for SMEs is networking. Wincent (2005) showed that the size of a firm matters in regard to how a company conducts its networking activity. Networks are important during the establishment, development, and growth of SMEs. Siu (2001) found that SMEs rely heavily on their personal contact networks in marketing their firms. Traditional economic structures favor size (large firms); however, today's economy is marked by relationships, networks, and information, which play to some of the characteristics of SMEs. Our research adds to the SME literature stream by examining outputs and attributes of marketing (e.g. sales, percent of sales spent on marketing) and by considering how these are related to firm characteristics such as entrepreneurial orientation.
Conceptual foundation and hypotheses: Two theories provide a conceptual framework to explain the differences in the influence of the marketing function: institutional theory and contingency theory. Institutional theory states that organizational structure and design are inextricably linked to social networks and are shaped by conformity and legitimacy pressures (DiMaggio and Powell, 1983; Meyer and Rowan, 1977). Firm structure does not necessarily reflect efficiency but legitimacy (Meyer and Rowan, 1977). Thus, the influence of a firm function such as marketing is shaped by conformity/legitimacy pressures. Alternatively, contingency theory states that organizational structure and design are a function of external factors. Firms must "fit" their environment, and characteristics of firms such as subunit power are a function of environmental determinants. Research on subunit power has explored relationships between environmental factors and organizational structures/power (Boeker, 1989; Hambrick, 1981; Hinings et al., 1974; Salancik and Pfeffer, 1974). This research tests elements of both theories: hypotheses H1 and H2 are based on the contingency theory argument, and hypotheses H3, H4, and H5 are based on institutional theory. Consumer versus industrial markets
Research design: Sample
Findings: The results from the two samples were compared using t-tests, and no statistically significant differences in means were found across the measures. Accordingly, the results were pooled. The role of marketing
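A minimal sketch of this pooling check in Python, using invented measure data in place of the study's survey responses (the measure names, sample sizes and the Welch t-test variant are all illustrative assumptions, not the paper's specification):

```python
# Sketch: compare means of the two samples on each measure; pool only if
# no measure shows a statistically significant difference. The arrays are
# simulated stand-ins for the study's survey measures.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(7)
measures = {
    "marketing_influence": (rng.normal(3.1, 0.9, 60), rng.normal(3.0, 0.9, 55)),
    "pct_sales_on_marketing": (rng.normal(5.2, 2.0, 60), rng.normal(5.0, 2.1, 55)),
}

poolable = True
for name, (sample1, sample2) in measures.items():
    t_stat, p_value = ttest_ind(sample1, sample2, equal_var=False)
    print(f"{name}: t = {t_stat:.2f}, p = {p_value:.3f}")
    poolable = poolable and (p_value > 0.05)

print("Pool the two samples:", poolable)
```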
Discussion/limitations: Our data suggest that marketing is not as well developed or influential in SMEs as it is in large firms. Marketing's scope, or extent of responsibility, is more limited than in large firms. Two factors, type of market (consumer) and firm orientation (hierarchical), facilitate marketing's influence within a firm. These results indicate that both underlying theories - institutional and contingency - received support and neither can be dismissed, suggesting the influence of marketing is a multidimensional construct. Individual traits (e.g. charisma, power, authority) were not considered in this study. However, our findings show marketing's influence can vary systematically as a function of institutional and external factors. Although our analysis sheds light on some internal and external factors that influence marketing's role in a firm, a major portion of the variance remains unexplained. Additional research is necessary to explore other potential influences on marketing's role in a firm. These might include personnel factors (the background and experience of marketing department personnel) and firm politics. For example, Reijonen and Komppula (2007) studied SMEs in Finland and found that the motivations of entrepreneurs are not solely financial and often are based on job satisfaction, control, and family concerns. Although this research fills a gap in the literature relating to the influence of marketing in SMEs, there is much work to be done. We did not address the relationship between marketing's influence and firm performance. The analyses done and conclusions reached in this research were based on a very limited sample of SMEs located in one region of the country; sampling across all regions of the United States would help in generalizing the findings.
Conclusions: The results of this study are particularly troubling because marketing resources are one driver of competitive advantage: although the studied firms have a formal marketing function, it would appear the function has not proven its value to the firm. For marketing to increase in influence, the individuals leading this function must gain a seat at the table. Our findings suggest that changing the influence of the marketing function might take considerable effort. One barrier to changing marketing's influence is the firm's entrepreneurial orientation, which can be a deep-seated cultural factor that typically requires time to change. An Indian proverb cautions: "Under a bright lamp, there is great darkness." Although marketing departments have a responsibility to market the firm's products and services, the task of marketers marketing themselves to internal stakeholders remains unaddressed. Marketing departments must do more to ensure that the voice of marketing is heard when key decisions are being made.
|
Multivariate robust estimation of inequality indices
|
[
"Development",
"Distributive justice",
"Income distribution",
"Welfare economics"
] |
Summarize the following paper into structured abstract.
1. Introduction: A single "extreme" observation can make an inequality index estimator uninformative, i.e., meaningless. Frequently, survey data are used to calculate inequality within an economy; we expect that survey data will contain both outliers and so-called contamination points. The main purpose of this research is to provide a new methodology to estimate inequality indices robustly; that is, to provide an estimator of inequality that is not severely affected by atypical data (outliers/contamination points). Our procedure uses information from multiple variables (e.g. socio-economic and geographic variables) to eliminate the effect of atypical observations. To our knowledge, in the inequality literature this is the first methodology that is able to identify outliers and contamination points in more than one direction[1].
2. Relationship to the existing literature: In an inequality framework, Cowell and Victoria-Feser (1996) used the concept of the influence function (IF) (Hampel, 1974) to assess the influence of an infinitesimal amount of contamination upon the value of an inequality statistic. They found that most inequality measures are not robust, i.e. a single arbitrarily large observation can make an inequality index totally uninformative. This implies that the quality of inequality estimates derived from classical statistical procedures can be quite poor. It is therefore necessary to develop robust procedures in order to estimate confidently how the total income in a given society is distributed.
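The following minimal sketch makes this sensitivity concrete; the lognormal sample and the single injected contamination point are illustrative assumptions, not the paper's data:

```python
# Illustrative only: a single huge "contamination point" in an otherwise
# well-behaved lognormal income sample pushes the classical Gini estimate
# toward 1, making it uninformative.
import numpy as np

def gini(x):
    """Classical sample Gini coefficient (sorted-rank formula)."""
    x = np.sort(np.asarray(x, dtype=float))
    n = x.size
    ranks = np.arange(1, n + 1)
    return 2.0 * np.sum(ranks * x) / (n * x.sum()) - (n + 1.0) / n

rng = np.random.default_rng(42)
incomes = rng.lognormal(mean=0.0, sigma=0.8, size=1000)
print("Gini, clean sample:       ", round(gini(incomes), 3))

contaminated = incomes.copy()
contaminated[0] = 1e9  # one mis-recorded income
print("Gini, one bad observation:", round(gini(contaminated), 3))  # close to 1
```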
3. Methodology: This research produces robust inequality estimates by first trimming the "raw" data using robust procedures, and then calculating classical inequality indices using sample-moment analogues in the "clean" subsample (i.e. after dropping atypical data). This is an innovation in the literature. The next subsections explain the methodology applied to do this, and the very last subsection in this section briefly discusses some alternative approaches.
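A minimal sketch of this trim-then-estimate pipeline, assuming scikit-learn's MinCovDet as the MCD estimator and a conventional 97.5% chi-square cutoff (both illustrative choices rather than the paper's exact settings):

```python
# Sketch: flag multivariate outliers with the Minimum Covariance Determinant
# (MCD) estimator, drop them, then compute classical indices on the "clean"
# subsample. Column 0 is income; other columns are auxiliary covariates.
import numpy as np
from scipy.stats import chi2
from sklearn.covariance import MinCovDet

def gini(x):
    x = np.sort(np.asarray(x, dtype=float))
    n = x.size
    return 2.0 * np.sum(np.arange(1, n + 1) * x) / (n * x.sum()) - (n + 1.0) / n

def theil(x):
    s = np.asarray(x, dtype=float) / np.mean(x)
    return float(np.mean(s * np.log(s)))

def mcd_keep_mask(X, alpha=0.975):
    """Boolean mask of observations kept after MCD trimming."""
    mcd = MinCovDet(random_state=0).fit(X)
    # mcd.dist_ holds the squared robust Mahalanobis distances of the fitted data
    return mcd.dist_ < chi2.ppf(alpha, df=X.shape[1])

rng = np.random.default_rng(0)
X = np.column_stack([
    rng.lognormal(0.0, 0.8, 500),   # income
    rng.normal(40.0, 10.0, 500),    # e.g. age of household head
])
X[:5, 0] *= 1e4                     # artificially contaminate five incomes

keep = mcd_keep_mask(X)
print("Gini  raw / trimmed:", round(gini(X[:, 0]), 3), "/", round(gini(X[keep, 0]), 3))
print("Theil raw / trimmed:", round(theil(X[:, 0]), 3), "/", round(theil(X[keep, 0]), 3))
```

The paper's LTS variant would replace the outlier-flagging step (trimming on least-trimmed-squares regression residuals instead of MCD distances); the downstream index computation on the clean subsample is the same.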
4. Results: Before applying our methodology to real data, we compare our procedures against the optimal B-robust estimation (OBRE) approach. Therefore, we conduct the same simulation that Cowell and Victoria-Feser (1996) proposed; the next subsection describes that simulation exercise. We also performed artificial contamination exercises with real data to show that we can effectively flag contamination points; see Appendices 1 and 2 for those results.
5. Conclusions: In this paper we consider the estimation of income inequality indices (such as the Gini, Theil, and coefficient of variation) using data with some atypical observations. It is shown that even when only a small portion of the data is atypical, classical statistical procedures are not reliable in estimating income inequality indices. We propose two procedures to solve this problem: MCD (minimum covariance determinant) trimming and LTS (least trimmed squares) trimming. Both methodologies use all the information in a multivariate data set to remove atypical observations in order to estimate inequality indices robustly.
|
Learning about environmental issues: Postgraduate and undergraduate students' interpretations of environmental contents in education
|
[
"Environmental engineering",
"Education",
"Higher education",
"Learning"
] |
Summarize the following paper into structured abstract.
Introduction: Across Swedish higher education, the last few years have seen an increase in the number of environmental courses being offered at the undergraduate and postgraduate levels. According to a survey conducted by the National Agency for Higher Education (2001), 3 of 37 universities and university colleges offer introductory courses in "sustainable development" that are open to all students. Fourteen educational institutions offer specific environmental introductory courses to a select group of undergraduate students, and another 16 universities give undergraduate courses that include environmental issues or sustainable development. Moreover, several Swedish universities offer programmes in "Environmental science" leading to a BSc or an MSc degree, and there are also a number of civil engineering programmes that focus on environmental engineering. In this paper, students' perspectives on environmental education are presented; the results draw from a doctoral thesis (Lundholm, 2003). It is based on three case studies in which the aim was to explore the students' conceptions of ecology and environmental issues, and the learning process. Learning is viewed from an intentional perspective, which means that the students' aims in their studies, here defined as "projects", are explored through analysis of their actions and utterances in educational settings or interviews.
Research on learners and learning about environmental issues: Previous research on learners and learning in environmental education has focused mainly on the learners' knowledge, attitudes and behaviour in relation to environmental phenomena and environmental problems[1]. A number of studies on young people's understanding of environmental problems were concerned with investigating their scientific knowledge (Gomez-Granell and Cervera-March, 1993; Gambro and Switzky, 1996; Ivy et al., 1998). Boyes and Stanisstreet (1993, 1994, 1998) have studied young people's knowledge of the greenhouse effect and ozone depletion in particular. Research on students' learning has focused on the effects of environmental education, and has often been conducted as pre-test and post-test studies. Only a few investigations have been concerned specifically with students' personal interpretations and experiences of environmental education (Lai, 1999; Rickinson, 1999) or with the learning process (Emmons, 1997; Mason and Santi, 1998; Grace and Ratcliffe, 2002). Research on subject-related knowledge in environmental education has had its primary focus on the natural sciences, whereas the social science disciplines have received little attention from researchers in this field (Rickinson, 2001). Within much of this work there is a very strong science education influence. The environmental phenomena that have been investigated are ones from school curricula, the research participants are often from science teaching groups, and the implications drawn often take the form of strategies for science teachers (Rickinson, 2001, p. 219).
Purpose of the study: The purpose of the study was to investigate and describe students' interpretations of various tasks concerning environmental issues and of course content in ecology. The purpose was also to investigate the students' learning, defined as a process of differentiation between contexts. The thesis attended to the following research questions: RQ1. How do six first-year civil engineering students interpret a course content in ecology and environmental issues; i.e. how do they contextualise the content?
Theory: Learning as a process of differentiation
Method: Case studies
First-year civil engineering students' interpretations of a course content in ecology and environmental issues: The engineering students' interpretations can be described in terms of a theoretical context in the sense that the students related the content to concepts and different academic subjects within the natural sciences[3]. The content also raised issues concerning human beings in relation to nature. The students discussed the notion of humankind as being within or outside the eco-system, and this seems to have led them to consider the moral aspects of human actions and the effects of these actions on nature. This means that the students' interpretations of the course content can be said to be embedded within a cultural context of values and norms. The students seemed to believe that the teacher had implicitly expressed the viewpoint that human beings have affected nature only in a negative way, and that the teacher had neglected to highlight the benefits this impact has had for humans. In talking about these aspects of values, the students showed irritation and indignation and seemed to find the whole issue quite provocative. In the analysis of the different ways in which the students interpreted the course content, the results show that during the lectures the engineering students posed questions to the teacher concerning possible solutions to various environmental problems and whether these solutions were good or bad and could eventually lead to improving the environment. Finally, the students focused on different aspects of the course content, such as the water cycle; they also stated that learning about such matters as the chemical formulas of the phosphorus cycle was irrelevant for their future profession as civil engineers. The students' different contextualisations of the course content, as well as their questions and the aspects of the content they focused on, were analysed from an intentional perspective. The students' actions and utterances become meaningful if we interpret them in relation to their future goals, i.e. their projects. Their interpretations of and reactions to the course content become intelligible and possible to understand in view of the professional project of becoming a civil engineer. As civil engineers they will inevitably affect nature in some way, good or bad. An ecocentric perspective that places a higher value on nature than on human beings, together with a view that implies that the effect humans have on nature is harmful, may come into conflict with the very notion of being a civil engineer. The questions that were posed in the classroom concerning solutions to environmental problems can also be understood as being both part of a professional project and part of a possible project related to a more general interest in wanting to know what could be done to improve the environment. Finally, the two students who stated that certain aspects of the content were irrelevant were arguing with reference to the civil engineering profession.
Biology students' interpretations of a task on environmental reports: When the students were working on a task concerning environmental reports, different contextualisations as well as different problems were brought to the fore. The students decided to work on a problem about environmental reports as a general phenomenon and to collect reports from different companies and analyse and study the text they contained. While the students were working on that problem, another problem, this one concerning the environmental work of certain companies, was brought to the fore. In the students' discussion, this problem meant that the students analysed and discussed the companies' work as such, and not the texts presented in the reports. In particular, one of the students in the group, Hans, clarified and stressed this difference. The students' work on analysing the texts was contextualised into an academic and cultural context. By setting up 15 criteria for analysing the 14 reports, the students' aim was to describe the reports' content concerning the companies' environmental work and how these routines were presented to the reader. While solving the problem, the students' personal judgements about, for example, the environmental education given to the staff became a topic for discussion. One student in particular emphasised the importance of keeping judgements out of the discussion, and argued that the group should try to describe the reports in terms of "what" content was being presented and "how" it was being presented to the reader. He also stressed that the group should not evaluate the different aspects presented in the reports, for example concerning environmental education, as being good or bad. Eventually, on the last day of working on the task, the students discussed the difficulty of trying to keep their values and judgements, as well as their feelings about the different companies, separate from the analysis. In analysing the students' interpretations of the task from an intentional perspective, the different contextualisations, as well as the different problems, can be discussed in relation to the students' projects. A possible project that can be ascribed to the students is a concern for the environment and an interest in promoting and working towards a better environment. Whether this is a project of interest or of profession, or perhaps both, is difficult to say. The students' apparent interest in working with a problem that concerned the companies' business as such, and not just what was stated in the reports, is understandable in light of an environmental interest in finding out what is being done within the companies and whether this will eventually improve the environment. The students' difficulty in differentiating between, on the one hand, a cultural context of values and judgements about the companies' environmental work as described in the reports and, on the other, an academic context can also be explained in relation to a project. If the students' project of interest is to promote environmental change, their wish not only to describe environmental reports but also to analyse and judge them in terms of good and bad becomes reasonable.
Postgraduate students' interpretations of environmental research: For the six postgraduate students, the task of writing a doctoral thesis raised different problems, which were possible to analyse in relation to the students' different projects. First, the thesis was interpreted as an assignment to be attended to within the university and the scientific community; this problem can be understood as being part of the students' educational project. Second, for five of the six postgraduate students, the task of writing a thesis was commissioned by a company outside the university, and the research results were to serve as a means for companies to take action; writing a thesis was thus interpreted as a question of more or less producing a plan of action. Third, writing the thesis was a project of personal interest in the sense that the students felt they could contribute to improving the environment. Three of the students also stated that their postgraduate studies would benefit them in their professional work and that this was a contributory reason for their taking on postgraduate studies. In solving the problem of writing a doctoral thesis within the educational project, some difficulties arose. To be able to answer their research questions, two of the students, Elisabeth and Sofia, had to acquaint themselves with subject areas in the social sciences. These subjects were quite new to them, as were the qualitative methods they were expected to use in their research. They realised that knowledge about how to conduct qualitative research was scarce at KTH; therefore, the students had to learn on their own and try to overcome this obstacle in different ways. Another student, Mikael, stressed that he had chosen to narrow down his research and exclude questions that would lead him too far afield into other subject areas. He argued that too wide a research question could not be handled in a doctoral research project. The field of environmental research evoked thoughts about identity and feelings of uncertainty. Elisabeth discussed the fact that her background was in civil engineering and that civil engineers traditionally had not engaged in the kind of environmental research that she was now conducting. Sofia, on the other hand, was not a civil engineer and was worried that her natural science background would pose problems. However, she stated that any student entering her research field and having to learn a new subject, in her case macroeconomics, would have been confronted with the same difficulties regardless of whether their background was in civil engineering or natural science. In working on her thesis, Elisabeth had come to realise that she and the Hydropower Company that had given her the assignment interpreted the concept of "sustainability" in different ways. She also discussed the fact that she had personal beliefs and values about the issues she was writing about and was worried that these would influence her research in an inappropriate way.
Comparison of the results in the three case studies: Contextualisations and differentiation
Discussion: The findings raise the question of how affect can influence the learning process and the students' possibilities of differentiating between various conceptual frameworks. In relation to research on cognitive development as a process of conceptual change, in which the student abandons conceptions due to "dissatisfaction" (Posner et al., 1982, p. 214) and finds new conceptions "intelligible" and "plausible" (Posner et al., 1982, p. 214), it can be questioned whether learning is such a rational and intellectual process. If we adopt an intentional perspective when looking at learners and learning, we can perhaps better understand the rationale and logic of their ways of reasoning about and understanding environmental issues. The results can be discussed in relation to educational practice and the content, as well as the purpose, of environmental education. If environmental education can raise these kinds of moral and ethical conceptions concerning nature and man's behaviour, as well as judgements about companies, then ethics or discussions concerning values should be part of the curriculum or in some way highlighted and recognised. Perhaps the engineering students would have benefited from a discussion of different ethical perspectives on nature, as well as the ethics of professional engineers and the possible dilemmas they will meet in their future careers. Furthermore, the students' focus on solving and finding solutions to environmental problems is an issue to consider in education. Whether or not the content includes aspects of the ways in which environmental problems are dealt with in society, this focus of interest will probably affect the learning process.
|
Improving ITIL compliance using change management practices: a finance sector case study
|
[
"Information technology",
"Service management",
"Information technology infrastructure library",
"Organizational change management",
"Organizational change",
"Change management"
] |
Summarize the following paper into structured abstract.
1 Introduction: Service firms are searching for ways to deliver higher levels of information technology (IT) service and are demanding more from their information systems (IS) groups, expecting quick responses to new business opportunities, efficient support for critical processes, and satisfied customers and internal staff. Hence there is a growing interest in adopting IT service management (ITSM) and best practices such as the Information Technology Infrastructure Library (ITIL) (Galup et al., 2009). ITSM is defined as "the implementation and management of Quality IT Services that meet the needs of the business" (ITIL, 2007). ITSM is a process-oriented discipline for managing IT service operations; it shares this theme with other quality improvement methods and, as such, differs from traditional technology-oriented approaches in its focus on customer relationships (Galup et al., 2009). A main benefit reported by ITSM adopters is an increase in customer satisfaction (Cater-Steel et al., 2009). ITSM concepts are often implemented through the ITIL framework, a sector-specific quality and best-practice guide on the management of information technologies (Iden, 2012). The operational benefits expected from ITIL implementations are the alignment of IT services with business needs, improved quality of the IT services themselves, and a reduction in the long-term costs of service provision (McNaughton et al., 2010). One of the main challenges faced by companies during the adoption of ITIL is the organizational change required to attain a service-oriented culture (Cater-Steel et al., 2007; Spafford and Holup, 2010). Various practices to facilitate organizational change are proposed in the academic and professional literature (Buchanan et al., 2005; Kanter, 2001; Kerber, 2001; Kotter, 1995; Spafford and Holup, 2010), and most of this literature argues that the use of change management practices (CMPs) has a positive effect on the speed and quality of the change process and on results for the organization. Nevertheless, little has been published on these practices as they relate to ITIL implementations. Therefore, this paper presents the findings of a multiple case study investigation, which examines the experience of four Peruvian financial companies implementing the ITIL framework. The cases offer insight into how organizational change management processes supported the ITIL implementations and identify which organizational CMPs were most frequently used. The paper is organized as follows: Section 2 introduces the ITIL framework, followed by Section 3, a review of the literature on CMPs concluding with the study's research questions. Section 4 provides details of the methodology used. Section 5 provides the case profiles, the findings and the cross-case analysis. Finally, the conclusion sums up the findings, presents the study's limitations and offers a direction for future research.
2 Information technology infrastructure library: ITIL is a framework that describes best practices and provides guidance on the management of IT processes, functions, roles and responsibilities in ITSM. It was first developed in the 1980s by the UK Government agency, the Office of Government Commerce (OGC), in response to the government's growing dependence on information technologies (Galup et al., 2009), to promote efficient IT operations and to improve IT service delivery and operations within government-controlled computing centers (Salle, 2004). Version one of ITIL consisted of 40 volumes covering "best practices" in different areas of IT service provision. In 2000, ITIL version one was replaced by the seven-volume ITIL version two (ITILv2), consolidating the practices within an overall framework. The two primary components of ITILv2 are service delivery and service support, which consist of ten core processes and the service desk function (Table I). In 2007, ITILv2 was further enhanced to become ITIL version three (ITILv3), now consisting of five volumes: strategy, design, transition, operation and continuous improvement. These volumes extend the ITILv2 model and processes by organizing them around a service lifecycle model. 2.1 ITIL adoption and implementation
3 Organizational CMPs: Kerber (2001) identifies three approaches used in organizations to enact organizational change: 1. direct change, which is motivated from senior management, wielded by authority and achieved through compliance; 2. planned change, which originates from any level in the organization but is managed through the top management layer and seeks the commitment and involvement of the organization through CMPs that mitigate resistance; and 3. guided change, which arises from within the organization and is achieved through the commitment and contribution of staff toward organizational objectives. Kerber and Buono (2005) argue that the effectiveness of the approach to the management of change depends on contextual variables such as: * the complexity of the business environment; * the socio-technical uncertainty of the tasks or problems; * the ability to change the organization; and * the risks associated with the alternatives of not changing or changing slowly. Other factors may add complexity to the management of change, for example the cultural context (Martinsons et al., 2009) and the interaction between different agents of change, namely senior management, middle management, external consultants and work teams, who have different experiences and perspectives (Caldwell, 2003; Choi et al., 2011). On the other hand, management support is considered key to the success of organizational change (Abraham et al., 1999; Spafford and Holup, 2010), as is the allocation of adequate resources (Becker, 2010), although following a change program is not a guarantee of successful outcomes (Choi et al., 2011). Kotter (2007) presents a set of eight sequential CMPs that allow for successful organizational change. Kotter's approach is well known and has been applied frequently (Aiken and Keller, 2009; Smith, 2011) (Table II). CMPs are commonly proposed in temporal sequences such as preparation, implementation and consolidation (Raineri, 2011). Research has found that change agents show a preference for CMPs related to the preparation stage because it is easier to prevent errors in early stages, which lessens implementation failure at later stages (Holt et al., 2007); because they are able to place more resources at the beginning of a change project (Raineri, 2011); and because the preparation stage requires more analytical skills, in which managers tend to have more training, compared with the more political and interpersonal skills required in the later stages of implementation (Shipper, 1999). Similarly, Raineri (2011) found a higher frequency of preparation practices in comparison with the implementation stage, and that the use of CMPs has a significant impact on the achievement of objectives and deadlines. Abraham et al. (1999) found that the key factor in achieving change towards a culture of quality is top management support, with clarity of vision, participation, communication and appropriate resources as lesser factors.
Becker (2010) found that understanding the need for change, the level of organizational support and training, measurement of the change, positive experience and informal support, a history of organizational changes, and prior expectations and individual feelings are factors that influence adoption. Considering the lack of focused research on ITIL implementation and managed change, it would seem useful to investigate this phenomenon by studying firms that have implemented the ITIL framework and assessing which organizational CMPs were most frequently used and whether their influence on ITIL implementation outcomes can be observed. Therefore, two research questions were formulated for this study: RQ1. How did financial companies in Peru that have implemented the ITIL framework use organizational CMPs? RQ2. How are the levels of ITIL compliance achieved by financial companies in Peru influenced by the CMPs that were used?
4 Research method: The study employed the multiple-case design proposed by Yin (2009) as the principal method to gather data to answer the research questions. The case study is an appropriate research method for analyzing a phenomenon in its natural environment when the researcher has no influence over events and when the context is considered relevant (Benbasat et al., 1987; Yin, 2009). The case study methodology allows for in-depth questioning and the capture of important aspects of the complexity of an organization. It is recognized that the conclusions of an investigation by the case study method can be limited, with the generalization of the findings being a common criticism (Gable, 2010; Lee, 1989). However, the problems of validity and generalisability can be addressed by using competent research methodologies (Meredith, 1998; Eisenhardt, 1989). The case study method was considered appropriate for this study as it allowed the phenomenon, the ITIL implementation, to be studied in its natural setting (Meredith, 1998), allowed questions of the "how and why" type to be posed and answered (Yin, 2009), and is an appropriate methodology for studying new topic areas (Eisenhardt, 1989). The design of the investigation involved four sites, each studied in depth using multiple data sources and a range of collection methods (Eisenhardt, 1989). A single industry was selected for the study in order to control for the influence of context and for the different environments and resource availabilities across industries. Four companies from the Peruvian financial sector that had implemented the ITIL framework were identified. The focus of this study is on ITILv2 implementation because, while ITILv3 was recently released, ITILv2 is still the most commonly used (Pollard and Cater-Steel, 2009). The four companies had each implemented the 11 processes of ITILv2, and in all the cases the ITIL implementation projects were supported by management and had been allocated sufficient resources for completion. Data regarding the implementations were collected through separate, semi-structured interviews with various staff involved in the projects, including top management, operations and IT managers, and users of the IT services. Confirmatory data were gathered through secondary sources such as internal documents and progress reports. Through a review of the literature on ITIL adoption and organizational change management practice, a list of questions based on the change management framework of Kotter (2007) was developed. The survey instrument was developed in the Spanish language by native Spanish speakers, and all of the respondents speak and write in Spanish as a normal part of their work. Before the interview, the interviewees were given the set of open-ended questions about their strategy, motivations, problems and the CMPs used during the ITIL implementation. During the interview, a discussion following the question format was initiated, with the answers to these questions recorded digitally and additional notes taken by the researchers. Raineri (2011) found a likely bias among interviewees who were also project implementers: they would be more inclined to say they had used CMPs. Therefore, to reduce this bias, the data were corroborated with other participants in the ITIL implementations. The interviews were transcribed verbatim and underwent content analysis.
Responses were coded against the question structure and the stages of CMP implementation following the seven steps, allowing the researchers to gauge CMP usage for each case. To guard against bias in the data analysis, each interview transcript was content analyzed twice to verify the results. Each case was then written up and subjected to a cross-case analysis, identifying the similarities and noting differences in themes and patterns across the cases. To determine the level of ITIL compliance, the researchers selected the itSMF self-assessment tool (itSMF, 2003). The itSMF instrument compares an organization's performance with the ITIL best practices, assessing the level of compliance with the ITIL processes and producing its results based on a framework of capabilities through a five-point scale (McNaughton et al., 2010). Case firm IT managers were each given an itSMF questionnaire to complete. The questionnaire results provided each case with a score between 0 and 5 for each ITIL core component's compliance with best practice, enabling case comparison.
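As a rough illustration of how such per-process scores support cross-case comparison, the sketch below aggregates invented scores; the process names and values are hypothetical, not the study's data:

```python
# Hypothetical per-process itSMF compliance scores (0-5 scale) for two cases;
# aggregating them gives a simple basis for cross-case comparison.
itsmf_scores = {
    "Case A": {"incident mgmt": 3.5, "problem mgmt": 3.0, "change mgmt": 4.0},
    "Case B": {"incident mgmt": 2.0, "problem mgmt": 1.5, "change mgmt": 2.5},
}

for case, processes in sorted(itsmf_scores.items()):
    mean_score = sum(processes.values()) / len(processes)
    print(f"{case}: mean compliance {mean_score:.2f} / 5")
```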
5 Results: 5.1 Case profiles
6 Discussion and conclusions: This study aimed to investigate the importance of applying managed change as part of an ITIL implementation. To this end, four Peruvian financial sector firms were chosen as cases and a multiple case study investigation was undertaken. The study undertook to ascertain which CMPs were being used and their level of influence on ITIL compliance. This study finds that not all firms take full advantage of CMPs while implementing the ITIL framework. The cases showed that firms using higher levels of CMPs achieved higher levels of ITIL implementation compliance. This result suggests that the use of CMPs has a positive impact on the outcomes of ITIL implementations. It was also found that the firms that used fewer CMPs reported resistance to change during the implementation. However, it is important not to discount that the higher-scoring firms may have had valuable or rare resources and competencies which they utilized in times of change, giving them an advantage over others (Barney, 1991). The case studies also demonstrate that more change practices related to preparation, and fewer change practices related to implementation and consolidation, were used during the ITIL implementation process. This result may be attributable to the institutional strengths of the firm: case D seemed to have institutional policies and practices that supported the ITIL implementation across all of its phases, while case B seemingly had the least commitment in terms of resource allocation, sustainable change and staff involvement. Likewise, the participation of an external consultant during the adoption of ITIL may favor the more frequent use of practices related to all three change stages (Pollard and Cater-Steel, 2009), allowing those firms that used consultants to apply more CMPs. In addition, the choice of implementation methodology may make some practices more evident than others. All four case studies applied phased implementations, a strategy which favors the use of the change practice "generation of short-term results". The main limitation of this study is the small number of cases analyzed, so the findings should be viewed with caution. The cases support the theory that managed change assists the implementation of ITIL processes. However, the study was undertaken in a single industry, which tends to have homogeneous processes and systems, and the firms were all of a similar size. When examining cases of different sizes, system processes and industries, divergent results may be produced, but such results are necessary to properly extend the case theory in question (Meredith, 1998). Future research may wish to extend the themes of this paper by designing and undertaking research on which CMPs have the greatest impact on levels of ITIL compliance, using a suitable quantitative research instrument and method. In conclusion, this paper makes three contributions. First, it provides an exploratory study of the links between ITIL implementations and the use of CMPs. Second, in terms of results, it provides evidence of a link between ITIL implementation and managed change, which is important information for practitioners who may be planning future ITIL projects. Finally, this study adds to the literature on ITIL implementations, particularly with respect to organizational change.
|
Organizational unlearning as changes in beliefs and routines in organizations
|
[
"Learning organizations",
"Memory",
"Organizational culture",
"Organizational change"
] |
Summarize the following paper into structured abstract.
Introduction: Rapid changes and unpredictable events occur in the business environment. These changes are, in part, the result of market growth or technology development, and they create turbulence that can destroy the existing competencies of an organization (Tushman and Anderson, 1986). In such a changing environment, organizations find that their previous strategies, core competencies (Prahalad and Hamel, 1990), beliefs, values, and cultures (Moorman and Miner, 1997; Sproull, 1981) are becoming less effective, or are no longer effective at all. These core competencies, which might have taken many years to develop and refine, can become core rigidities (Leonard-Barton, 1995), and can hinder an organization's ability to compete and win. Hedberg (1981) has noted that knowledge becomes obsolete under turbulent environments and must be renewed. Hedberg (1981) has defined this renewing activity as "unlearning," and has emphasized that an inability to unlearn is a critical weakness of many organizations. Hamel and Prahalad (1994, p. 71) have also noted, "Companies are going to have to unlearn a lot of their past - and also to forget it! The future will not be an extrapolation of the past." Unlearning is vital for organizations to learn how to survive and compete in the competitive landscape (Nystrom and Starbuck, 1984; Hedberg, 1981). Specifically, organizations find it more difficult to learn without first unlearning (Hedberg, 1981). Organizations have widely established and accepted beliefs and methods that persuade them to neglect important new technologies and markets, because they have a great emotional investment in old ways of working. Those established beliefs and methods create rules and competency traps that negatively affect the operations of organizations (Mezias et al., 2001). In particular, such procedures, methods and beliefs inhibit the reception and evaluation of new market and technology information, and reduce the perceived value of new information. For instance, Day (1994, p. 24) has stated that: "The presumed correctness of past actions and interpretations is reinforced by repeated success, and ensuing complacency breeds rejection of information that conflicts with conventional wisdom." Also, people focus on information that supports their current beliefs and methods. However, incorrect beliefs and methods perpetuate errors in judgment and action because people alter their perception of reality to fit their beliefs and methods (Rousseau, 2001). Fixed beliefs lead to perception rigidity or inaccurate causal attributions, and these result in organizations becoming slower to recognize changes (Dickson, 1992). For instance, Starbuck (1996) points out that: People in organizations find it very difficult to deal effectively with information that conflicts with their current beliefs and methods. They do not know how to accommodate dissonant information and they find it difficult to change a few elements of their interdependent beliefs and methods. Even though unlearning is important, it is problematic due to the difficulty of conceptualizing, operationalizing and testing it in the current scholarship. As Crossan et al. (1995) have pointed out, the "unlearning" concept has generated great interest, but it has received limited acceptance in the literature due to confusion regarding the terms "learning" and "unlearning" both in theory and in practice. In particular, this confusion was heightened when the term "unlearning" was used as the reverse of "learning."
One of the reasons for this confusion is that most of the discussion of unlearning has been conducted in the organizational learning literature (Akgun et al., 2003). However, the unlearning concept is not just bounded by or restricted to the organizational learning literature. Since March's organizational adaptation theory (Cyert and March, 1963) and Lewin's (1951) organizational change theory, unlearning has been positioned at the heart of the organizational change process (Walsh and Ungson, 1991). For example, scholars in the school of organizational change and memory have pointed out that unlearning and memory are closely related concepts, because memory creates competency traps due to fixed and well-accepted routines and values in organizations (Levitt and March, 1988; Moorman and Miner, 1997). Accordingly, an understanding of unlearning can be leveraged, and the confusion between unlearning and learning reduced, by reviewing the organizational change and memory literature. The purpose of this paper therefore is to shed light on the unlearning concept based on the organizational memory and change literature. Specifically, the research questions addressed in this study are: RQ1. How can the unlearning concept be conceptualized and operationalized for future empirical investigations? RQ2. What types of unlearning can be revealed, based on the organizational memory and change literature, to enhance the organizational learning and change management scholarship? In the sections that follow, this study: 1. explains the concept of unlearning by reviewing different streams of research; 2. argues that unlearning reflects two underlying components, changes in beliefs and routines, and then proposes that the combination of these components of unlearning creates distinct types of organizational unlearning based on environmental contingencies; and 3. provides guidelines for future research.
What is unlearning?: The concept of "unlearning" has attracted many researchers from diverse fields over the last 40 years, with numerous studies coming from the individual learning and cognitive psychology literature. At the individual level, unlearning has been discussed as part of verbal learning psychology and individual cognition. In the verbal learning psychology literature, unlearning has primarily been viewed as deleting and replacing old stimuli by interpolated learning, which is also called interference theory (Postman et al., 1965). Scholars in this stream of research (Postman et al., 1965) have proposed that unlearning is the gradual weakening of associations between stimuli and responses by retroactive inhibition. In addition to verbal learning psychology, the cognitive science literature has viewed unlearning as changes in belief structure (Fiske and Taylor, 1984), mental model (Johnson-Laird, 1983), frame of reference (Shrivastava and Schneider, 1984), or cognitive maps (Walsh, 1988). Moreover, most of the studies attributable to unlearning in individual psychology and cognition have focused on memory loss (Hull, 1943), deterioration of the trace in memory (Koffka, 1935), intentional forgetting (Freud, 1943), and retroactive inhibition (Postman et al., 1965). Scholars have thus viewed unlearning as "memory eliminating" in a system (that is, the individual) (Greeno et al., 1971), and have held that this elimination helps bring about new learning, e.g. acceptance of novelty (new knowledge). Apart from studies at the individual level, unlearning has also been studied at the organizational level, especially in the scholarship on organizational change and memory (Lewin, 1951; Huber, 1991; Bartunek, 1988). Studies on unlearning at the organizational level reveal a similar, although not identical, pattern to that found in the scholarship on individual unlearning - that is, the elimination of memories (Walsh and Ungson, 1991). Scholars in the school of organizational memory have viewed organizational unlearning in various ways, as: * changing acquisition processes and possible retrieval processes in the organizational memory (Walsh and Ungson, 1991); * disruption and re-creation of portions of the organization's memory (Anand et al., 1998); * purposely eliminating memories (Stein, 1995); * dissembling and disconfirming the causal connections in the organizational memory (Nicollini and Meznar, 1995); and * disintegrating the community's collective infrastructure of routines (Blackler et al., 1999). Thus, the common theme to date for unlearning, in both individual- and organizational-level studies, is: * eliminating memory via disconfirmation; * the disassembly of the connections and mechanisms of memory; and/or * changing how memory is manifested.
Organizational unlearning and memory: Because unlearning has been conceptualized as memory eliminating, an investigation of how memory is formed and manifested could help in understanding and operationalizing unlearning in organizations. Moorman and Miner (1997), for instance, proposed that organizational memory is manifested in three forms: 1. organizational beliefs, which include knowledge, frames of reference, models, values, and norms; 2. formal and informal behavioral routines, procedures, and scripts, which include standard operating procedures, managerial and technical systems, capabilities, and information-sharing mechanisms; and 3. the organization's physical artifacts, including tools, programming, assembly-line layout, and features of products and product lines (such as product design, materials, packaging, and logos). Therefore, logically, unlearning must include eliminating beliefs, routines, and physical artifacts in organizations, based on the insights garnered from the cognitive psychology, organizational memory and change literature. However, this definition is too broad for a parsimonious operationalization of unlearning, and thereby raises three concerns, namely ontological, construct validity, and pragmatic concerns. These three concerns clarify the operationalization of the unlearning concept in a systematic fashion. Specifically, these concerns: 1. provide scrutiny and scientific explanations, which have the stature and explanatory significance assumed for unlearning; 2. demonstrate what makes up the unlearning concept as observed both in academia and in practice; and 3. explain the variables or constructs of which unlearning is a function by applying the scientific principles of systems thinking, i.e. these concerns are mutually dependent. Ontological concern
Organizational unlearning and change: Since organizational unlearning is operationalized as changes in beliefs and routines, this operationalization can be confused with organizational change as well. Indeed, organizational change and learning occur through revision of organizational scripts, cognitive schemas establishing behavior and routines, or behavioral recipes involving beliefs, as indicated in action learning (Johnson, 1990), the neo-institutional theory of organization (Greenwood and Hinings, 1996), social cognition (Akgun et al., 2003), and social psychology (Weick, 1979). Unlearning shares similar etiological properties with organizational change (Johnson, 1990). However, note that organizational change is a generic term and a broad concept involving "analytical, educational, learning and political process as well as a process which combines rational, political and cultural elements" (Hendry, 1996). In essence, organizational change is an end state of a transformation process (Tsoukas and Chia, 2002). Unlearning, in contrast, focuses in particular on memory eliminating. Organizational unlearning is a change in the collective cognition and routines that coordinate the organizational change process. Specifically, unlearning is a stage or catalyst (one that causes a process or an event to happen) in the change process that makes it a dynamic process, as indicated by organizational change theories (Mezias et al., 2001). For instance, Lewin's (1951) model for change involves three steps: 1. unfreezing, which is suspending the current structure and involves disconfirmation of expectations, learning anxiety, and the provision of psychological safety; 2. transition, which is changing the mental structure and involves cognitive restructuring, semantic redefinition, and new standards of judgment; and 3. refreezing, which is adopting the new mental structure and involves the creation of supportive social norms and making change congruent with personality. The second stage of Lewin's model, transition, is indicative of the unlearning phenomenon. Gemmill and Smith (1985) also explained the elements of system change as: * disequilibrium, which is the result of external and internal forces, such as crisis and chaos; * symmetry breaking, which refers to the breaking down of existing patterns of interactions or system habits; * experimentation, which refers to creating different new forms or configurations to reformulate the system; and * reformulation, which is selecting a new configuration. In their model, the symmetry-breaking stage denotes the unlearning concept. For that reason, we propose that "unlearning is embedded in the organizational change process. However, the aim of unlearning is not performance improvement per se; rather it is a catalyst for the change process."
Types of organizational unlearning: We argued that unlearning involves the combination of changes in beliefs and routines, and that these two components of unlearning must exist in tandem for unlearning to occur effectively. Nevertheless, in practice, the magnitude of the changes in beliefs and routines may vary. For instance, an organization is likely to put a higher emphasis on changes in routines than on changes in beliefs. One of the reasons would be the environmental conditions, in particular environmental turbulence (Schein, 1993; Hedberg, 1981; Nystrom and Starbuck, 1981). However, considering environmental turbulence as a unidimensional construct narrows our understanding of the dynamic unlearning process, requiring a multidimensional view of environmental turbulence. Environmental turbulence refers to the degree of change and unpredictability of an environment (Glazer and Weiss, 1993, p. 510). The degree of change involves more information, leading to multiple and conflicting interpretations of an organizational situation. This type of contingency requires more interpretation and a variety of schemas or belief structures. Unpredictability of the environment is related to the time-sensitivity of information: information in a given period loses its value in subsequent periods. Since different environmental conditions require organizations to use the appropriate unlearning process, we propose that environmental turbulence results in four types of unlearning process, as shown in Table I. The labels used for unlearning are borrowed from Gnyawali and Stewart (2003). Reinventive unlearning occurs when organizations put high emphasis on both changes in beliefs and changes in routines. Formative unlearning occurs when more emphasis is placed on beliefs and less on routines. High emphasis on changes in routines and low emphasis on changes in beliefs results in adjustive unlearning, and low emphasis on both changes in beliefs and routines results in operative unlearning. It is interesting to note that these types of unlearning make good connections with the literature on planned (discontinuous) change. The proposed types of unlearning provide a meeting point between theories of planned change and theories of continuous change, as Weick and Quinn (1999) stated. For instance, Weick and Quinn (1999, pp. 381-2) pointed out that: Recent work suggests, ironically, that to understand organizational change one must first understand organizational inertia, its content, its tenacity, its interdependencies. Recent work also suggests that change is not an on-off phenomenon nor is its effectiveness contingent on the degree to which it is planned. Furthermore, the trajectory of change is more often spiral or open ended than linear. All of these insights are more likely to be kept in play if researchers focus on "changing" rather than "change." A shift in vocabulary from "change" to "changing" directs attention to actions of substituting one thing for another, of making one thing into another thing, or of attracting one thing to become other than it was. Indeed, the types of unlearning indicate that change is an on-going process, never ceasing, and provide a continuum between continuous change and planned (discontinuous) change. Also, the magnitude of the changes in beliefs and routines in each type of unlearning directs one's attention to the actions that can substitute one thing for another. Reinventive unlearning
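The typology just described can be restated compactly; the sketch below is only a summary of the Table I classification, with hypothetical "high"/"low" labels for the two emphasis dimensions:

```python
# The four unlearning types as a function of the emphasis placed on changes
# in beliefs vs changes in routines (labels after Gnyawali and Stewart, 2003).
UNLEARNING_TYPE = {
    ("high", "high"): "reinventive",  # both beliefs and routines change strongly
    ("high", "low"):  "formative",    # beliefs emphasized over routines
    ("low", "high"):  "adjustive",    # routines emphasized over beliefs
    ("low", "low"):   "operative",    # little change in either
}

def classify_unlearning(beliefs_emphasis: str, routines_emphasis: str) -> str:
    return UNLEARNING_TYPE[(beliefs_emphasis, routines_emphasis)]

print(classify_unlearning("high", "low"))  # -> formative
```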
Discussion: As shown in Figure 1, this study:

* denotes that unlearning can be operationalized as declarative and procedural memory elimination - changing beliefs and routines in organizations - based on the organizational change and memory literature; and
* identifies different types of unlearning contingent upon environmental conditions.

Also, contrary to the "common wisdom" that organizational unlearning is simply the opposite of organizational learning, Figure 1 shows that unlearning is an important sub-process of the organizational learning process and makes unique contributions to the organizational learning literature (Akgun et al., 2003). Specifically, since the key contributions of unlearning to the organizational learning process have long been ignored in the literature, the legitimization of unlearning in the learning process was not forthcoming. Based on the organizational change and memory literature, we recognize that unlearning:

* catalyzes the organizational learning process to foster a dynamic learning process;
* provides a platform for shifting from single-loop to double-loop learning; and
* connects the organizational learning and organizational change processes.

As noted above, unlearning "catalyzes" the learning process. The implication of the word "catalyze" is that unlearning makes learning a dynamic process. As described by several writers (Huber, 1991; Akgun et al., 2003), the organizational learning process encompasses a continuous, reflexive and reciprocal cycle of activities that include memory, sensemaking, intelligence, thinking, emotions, improvisation, information acquisition, dissemination, and implementation. Since it is a continuous process, no particular stage or sub-component can be taken as the starting point; therefore, missing or ignoring a component of the process halts the learning process in general (Akgun et al., 2003). In other words, all sub-components or processes work together rather than selectively. Unlearning facilitates a fluid and reflexive process of learning by changing beliefs and routines (Wijnhoven, 2001). For instance, organizational beliefs, which were created based on positive feedback from a firm's success:

* impose an interpretative inertia; and
* facilitate inadequate reception of action signals.

In addition, organizational routines, which indicate motor, fixed or automatic action and behavior, pose an organizational inertia in changing and novel conditions. Routines reinforce the status quo (Nelson and Winter, 1982) and inhibit active seeking of alternatives. In this regard, unlearning makes the change and learning process irreversible, because old beliefs and routines are altered and replaced by a new set of knowledge.

In addition to making the learning process dynamic, unlearning secures the learning levels. Specifically, unlearning is a means to shift from lower-level (i.e. single-loop) to higher-level (i.e. double-loop) learning. According to the organizational learning literature, single-loop or adaptive learning consists of detecting performance gaps and eliminating them in line with standard operating procedures (Argyris and Schon, 1978). Here, an organization does not change its beliefs and routines. However, if, based on environmental feedback, an organization changes its beliefs and routines, one may say that double-loop learning has occurred. Unlearning provides a way for this transition, changing beliefs about cause-effect relations and operating procedures (Baker and Sinkula, 1999; Pawlowski, 2001).
Wang and Pervaiz (2003), for instance, take a stronger stance and argue that organizational unlearning is needed to create quantum leaps. Finally, another critical role of unlearning is the interlocking of the organizational change and learning processes. There is common agreement that organizational change and learning are closely related concepts. For example, Pratt and Barnett (1997) argued that learning is not a stand-alone concept; rather, it belongs to a family of change processes that encompasses unlearning and relearning (that is, adopting new mental models, procedures and routines). Indeed, based on contingency theory, the relation between an organization and its environment is the main concern of the theoretical perspectives on organizational change (Pawlowsky, 2001). At the same time, organizational learning requires changes in the relationships between an organization and its environment (Hedberg and Wolff, 2001). Change involves some degree of learning, and learning provides a potential for change rather than any guarantee of its realization (Child and Heavens, 2001). Further, unlearning is one of the common denominators of the change and learning processes. For instance, Huy (2001) noted that beliefs and routines are the content of change. Additionally, organizational learning concentrates on the process that brings such change about (Krebsbach-Gnath, 2001), and unlearning is one of the vital constructs of the learning process. In this perspective, unlearning acts as a linking pin between the organizational learning and change processes.
Concluding remarks and future research: Unlearning is an important construct of the organizational change and learning processes, and thereby warrants in-depth investigation. However, since the term "unlearning" has usually been accompanied by the term "learning," as in "learning and unlearning," it was often seen as an outdated construct or study area due to the semantic pairing of the terms, even though it really is something different from learning, not simply its opposite. In this study, we further explored unlearning by reviewing the organizational change and memory literature. However, our study is just one of a few attempts to explore the unlearning concept; see also Hedberg (1981), Nystrom and Starbuck (1984) and others. For future studies, we suggest that:

1. Unlearning be investigated in the nomological web of other organizational learning constructs. Recent research only briefly mentions the niche of unlearning in the organizational learning literature (Akgun et al., 2003; Huber, 1991). For instance, according to Akgun et al. (2003), organizational learning is a process of reciprocal relations among information processing (i.e. memory, unlearning, sensemaking, etc.), intelligence and emotions. An empirical investigation of the interwoven relations among the sub-components of learning would give a greater understanding of the learning process at the organizational level.

2. The types of unlearning be investigated in detail using a longitudinal study. Specifically, the:
* triggers or antecedents;
* primary role of managers;
* behavioral aspects of the practical (and unexpected) use; and
* behavioral and operational consequences
of each type of unlearning can be studied in case studies.

3. Time-processing orientation and agreement preferences be investigated in the unlearning process. Specifically, conflicts arising during changes in routines and beliefs, the resolution of those conflicts, majority and minority issues, and power structures during the changes in beliefs and routines should be studied.

4. How unlearning occurs be investigated and empirically tested in groups, such as new product development teams. For instance, the literature on new product development demonstrates that groups with strong memories are least able to deviate from previous routines during the new product development process (Moorman and Miner, 1998). Therefore, eliminating group/team memory, or unlearning, is critical for new product performance in turbulent environments/conditions and begs empirical study. Also, empirically investigating unlearning in cross-functional new product development teams may illuminate unlearning at the organizational level. Specifically, it has been asserted that new product development teams show similar behavioral patterns to organizations, or represent microcosms of organizations, based on the literature on fractal geometry and images in systems thinking (Capra, 1996). The basic philosophy of fractal images is that all characteristic patterns of the system are found repeatedly at descending scales (Capra, 1996, p. 138). This is similar to holographic systems, which create processes where the whole can be encoded in all of the parts, so that "each and every part represents the whole" (Morgan, 1998, p. 71).
For this reason, empirically studying the new product development team unlearning process sheds light on the organizational unlearning process based on functional isomorphism rather than a homomorphic conceptualization of the unlearning process, because NPD team members exist on more than one level simultaneously, act differently as units and influence each other across levels (Weiss, 1993, p. 278), and the NPD process provides a more controllable, identifiable, and accurate unit definability (Meyers and Wilemon, 1986).
|
Present and correct: Understanding the impact of mindfulness on leadership
|
[
"Leadership development",
"Emotional intelligence",
"Mindfulness",
"Organizational leadership",
"Leadership performance"
] |
Summarize the following paper into structured abstract.
Review: When does a fad turn into a trend, and then an accepted habit or custom? We have all played our part in fads over the years, from looking after virtual pets (remember Tamagotchi, anyone?) to dancing the Macarena at some cousin's wedding. These all-consuming rushes of activity tend to define an era for people, and can often be resurrected with waves of nostalgia in the post-modern world we live in. From a business perspective, however, they can be a double-edged sword. If your product happens to be part of a craze, then it can be the wildest ride trying to keep pace with demand but also the most soul-destroying crash when the fad fades.
All in the mind: Some would argue that making employees feel good about working somewhere is a reward in itself for jumping on the bandwagon occasionally, but one wonders how often HR teams check the science of what they are about to change before making a leap of faith. It may be that there is none, or perhaps that whatever the fad is has an internal logic that makes sense and encourages people to go right ahead and adopt it. One fad that has arguably transitioned into a trend and is fast becoming a custom is mindfulness, and its adoption by executives as a way to improve their leadership skills has been widely reported in the mainstream media. But what is the science behind its perceived effectiveness?
Emerging themes: To look at the adoption of mindfulness, the author questioned over 40 people in leadership positions in a wide variety of organizations. Transcripts of the interactions were reviewed multiple times to identify key themes in the text to explain how the subjects felt about mindfulness and its effect on their leadership capabilities. The first overwhelming finding in the study was that every participant felt that mindfulness had helped them improve as leaders. Indeed, when you look at the seven themes identified in the interactions with leaders, the following had near universal coverage:
Ways forward: The second finding relates to the interpretation of mindfulness benefits as competences, and how the seven themes identified in the textual analysis implied that a wide range of corporate and social competences were improved. These competences have a strong record in academic research, and so the evidence that mindfulness can improve leadership skills looks solid. Finally, there was evidence that specific cognitive faculties were enhanced through the adoption of mindfulness, such as environmental observation and information gathering. The implications for leaders of using mindfulness to improve their performance are that it is at the very least likely to enhance how they feel about themselves and their own leadership capabilities, and very possibly how the rest of their organizations feel as well.
Comment: The article "Deconstructing the relationship between mindfulness and leader effectiveness" (2018) by Lippincott is an extremely positive study of mindfulness and should banish any cynics who still believe it is a fad. The evidence presented shows that not only can leadership performance be improved but perceptions of leaders can also improve, and this in itself should persuade many leaders and HR functions to think about its use in organizational training.
|
Technology adoption for the integration of online-offline purchasing: Omnichannel strategies in the retail environment
|
[
"Technology and innovation management",
"Digital transformation",
"In-store technology",
"Omnichannel retailing",
"Pioneering strategy"
] |
Summarize the following paper into structured abstract.
Introduction: The current retail scenario is characterized by strong consumer demand for entertaining and effective shopping experiences. This implies an extension of traditional offers through innovative technologies, while maintaining the same quality of service and products across the different channels through which consumers can search, compare, choose and purchase products, interacting with the brand (Neslin et al., 2006; Pantano and Viassone, 2015). Consumers seek consistent, positive experiences, where all channels have complete and accurate information about customers' history in order to earn their loyalty (PricewaterhouseCoopers LLP and Kantar Retail LLC, 2012).
Theoretical background: From multi to omni-channel retailing
Research design: Our research investigates the impact of IST on the customer's shopping experience, as perceived by store managers and employees. In order to address these objectives, this work employs a qualitative strategy of inquiry, involving data collection from a set of retailers who introduced innovative IST as first-movers in the market.
Results and discussion: Following our chain of evidence, the first step focused on the code frequency analysis.
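As a concrete illustration of what a code frequency analysis involves, the counting step can be sketched in a few lines. The qualitative codes below are invented for illustration and are not the study's actual coding scheme.

```python
from collections import Counter

# Hypothetical coded interview segments: one qualitative code per segment,
# as produced by the coding step of the analysis (codes invented here).
coded_segments = [
    "customer_engagement", "technology_ease", "customer_engagement",
    "store_image", "technology_ease", "customer_engagement",
]

# Code frequency analysis: how often each code appears across transcripts.
frequencies = Counter(coded_segments)
for code, count in frequencies.most_common():
    print(f"{code}: {count}")
```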
Conclusions, limitations and future research directions: In the increasingly competitive retail scenario, the advancement of digital and mobile channels raises customer expectations for businesses to engage with them wherever, whenever and however. Retailers need to find innovative ways to connect with their audience and offer enriched shopping experiences and value propositions, making the motivation for an omnichannel strategy design ever more compelling.
|
The consumer's expectation formation process over time
|
[
"Service levels",
"Customer service management",
"Customers",
"Perception"
] |
Summarize the following paper into structured abstract.
__NO_TITLE__: An executive summary for managers and executive readers can be found at the end of this article.
Introduction: The concept of consumer expectations has a rich theoretical and empirical history in the marketing literature. Expectations have been viewed as benchmarks consumers use to determine satisfaction (Cadotte et al., 1987; Churchill and Surprenant, 1982) or to appraise the performance of products and services (Boulding et al., 1993; Churchill and Surprenant, 1982; Parasuraman et al., 1985, 1988). Consumer expectations may be multi-level benchmarks used for evaluation or appraisal (Boulding et al., 1993; Hamer et al., 1999; Parasuraman et al., 1991; Zeithaml et al., 1993). Within the process, antecedents have a direct effect on the formation of expectations (Boulding et al., 1993; Clow, 1993; Tse and Wilton, 1988; Zeithaml et al., 1993). The final part of the process is the formation of consumer intentions to repurchase a product or service, based on comparisons of service performance to expectations (Oliver, 1980; Woodruff et al., 1983). Empirical research on consumer expectations has yielded substantial results, and a few researchers have taken the arduous route of longitudinal studies or simulated longitudinal studies (Boulding et al., 1993; Clow et al., 1998; LaBarbera and Mazursky, 1983; Oliver and Burke, 1999). However, few studies have examined the consumer expectations process over time (Hamer et al., 1999). It would seem logical that the expectations process might re-form over service/product usage (Tse and Wilton, 1988) or that expectations would be modified and adjusted as consumers' experience with a product/service increases or new information about the product/service is received (Boulding et al., 1993; Miller, 1977). If expectations are viewed as the central part of a process that has antecedents and consequences, one might expect that all components of the process, as well as the relationships between the process components, could change over time. The purpose of this research is not to offer yet another piece of research on expectations alone, but to examine the expectation formation and reformation process over time using a cohort of consumers in a field setting. To the best of our knowledge, this is the first time this approach has been taken to examine the expectations process. By enlarging the focus of the research to include the expectation formation process, the results are intended to increase our understanding of consumers' expectations processes over time as well as to develop managerial strategies to influence the entire expectations process. The paper begins with a proposed longitudinal model. The model is grounded in the research and propositions of Boulding et al. (1993) and Zeithaml et al. (1993). The model is operationalized using qualitative research and tested on 440 consumers at three points in time over a year. In this case, students are the actual consumers and a university experience in the freshman year is the ongoing service encounter being studied. The findings are discussed, future research is suggested, and managerial implications are offered.
Model development: Figure 1 illustrates the proposed model, which includes the components of the expectation process: antecedents (enduring intensifiers and self-perceived service role), two levels of expectations ("should" and "will"), appraised performance, and purchase/repurchase intentions. Three time periods or purchase periods are noted. Figure 1 also illustrates the relationships between components taking place within each purchase period. Figure 2 illustrates the relationships between components taking place across each purchase period. Usually models in the published research represent a slice of time with a starting and ending point. Our model begins at T=0 (or Time=0), prior to the first service experience. It continues over two successive repurchase periods (T=1 and T=2, respectively). We assume that at time (T=0), a service purchase has been identified and the consumer intends to proceed with the service purchase.

Expectations and their antecedents
Methodology: Analysis overview
Results: As described earlier, based on the qualitative research, the survey was developed to measure multiple dimensions of both expectations, which were also reflected in the appraised performance items. The key issues in our analyses involve relationships between these two different expectations and the performance measures; we therefore decided to identify factors/dimensions that would remain stable across the three time periods. Using exploratory factor analysis, we identified multidimensional factors of "should" and "will" expectations and appraised performance that remained stable over time (see Table I). Both the enduring intensifiers and the self-perceived service role were found to be unidimensional constructs. A list of items for each dimension and construct is available on request from the first author. Second-order factor analyses were conducted to test whether the three dimensions of "will" and "should" expectations and performance map onto corresponding higher-order constructs. For all three time periods, the results indicated that the three dimensions map onto a higher-order factor, with the loadings of the three dimensions statistically significant at the 5 percent level. Therefore, it was decided to form overall scale scores for "will" and "should" expectations and performance by averaging all the items across the three dimensions. For the unidimensional constructs, scale scores were also constructed by averaging scores on the corresponding items. Table I reports the means as well as the reliability coefficients for each construct.

Analysis of relationships within each purchase period
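Before turning to that analysis, the scale-construction step just described (averaging items within and across the three dimensions, then reporting reliability) can be sketched as follows. The data, item names and dimension grouping are hypothetical, and the alpha function is the standard Cronbach formula rather than the study's exact procedure.

```python
import numpy as np
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Standard Cronbach's alpha for a set of item columns."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

# Hypothetical survey data: rows = respondents, columns = items,
# with items grouped into three dimensions of "will" expectations.
rng = np.random.default_rng(0)
df = pd.DataFrame(rng.normal(5, 1, size=(440, 6)),
                  columns=["w1", "w2", "w3", "w4", "w5", "w6"])
dimensions = {"dim_a": ["w1", "w2"], "dim_b": ["w3", "w4"], "dim_c": ["w5", "w6"]}

# Overall scale score: average all items across the three dimensions,
# as done in the paper once the second-order structure was confirmed.
all_items = [c for cols in dimensions.values() for c in cols]
df["will_expectations"] = df[all_items].mean(axis=1)

# With random data the reliability value is meaningless; this only
# demonstrates the computation.
print(round(cronbach_alpha(df[all_items]), 2))
```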
Discussion: Our research developed a model of the expectation formation process over time by integrating the literature on multiple levels of expectations, the antecedents of the expectation levels, and the influence of expectation levels on service performance and purchase/repurchase intentions. A cohort of 440 consumers was surveyed over three purchase periods to test the proposed model. To the best of our knowledge, this is the first time that a comprehensive model of the expectations process has been field-tested on a cohort of consumers over multiple purchase periods. Results of the research fully supported five out of seven hypotheses and partially supported two hypotheses. Over three purchase periods, consumers maintained two significantly different expectation levels, a higher normative level and a lower predictive level (as suggested by Boulding et al., 1993). As hypothesized, the lower predictive level was less stable than the higher normative level over the three purchase periods. Looking at the formation of the expectation levels, one hypothesis suggested that future expectations would be a function of previous expectations and the current appraisal of the service's performance. Although this held for the lower, predictive level of expectations, the higher, normative level was influenced only by the higher, normative expectations from the prior purchase period. Antecedents were also tested to determine their roles in forming both levels of expectations. Zeithaml et al. (1993) proposed that the enduring intensifier antecedent would only influence the upper level (our "should"), while the consumer's self-perceived service role would only influence the lower level (our "will") expectations. We found that both antecedents influenced the two levels of expectations - a new contribution to the expectation formation process research. With time and purchase experience, the enduring intensifiers strengthen their influence on "will" expectations while their influence on "should" expectations wanes. The consumer's self-perceived service role influence on both levels of expectations weakens over time. Although the relative influence of the antecedents on expectations changes over time, our findings indicate the two antecedents remain significant influencers of both levels of expectations. However, the finding that the variance explained for the two levels of expectations waned over time and purchase experience may indicate that there are other influencers or antecedents not measured in this study that become important as the consumer continues to repurchase the service (i.e. advertising, awareness of other alternatives, changes in tastes/preferences, etc.). The research found that the influence of the expectation levels on appraised performance varies over time with the effect of both levels of expectations decreasing over time. Two other findings were consistent over all purchase periods. First, purchase intentions were directly influenced by appraisals of the service performance. Second, purchase/repurchase intentions influenced both expectation antecedents of enduring intensifiers and the consumer's self-perceived service role. One interesting finding that was not hypothesized has to do with comparing the performance appraisal mean score to the mean scores for both levels of expectations.
The "Zone of Tolerance" (Zeithaml et al., 1993), comprised of an upper level and lower tolerable level of expectations, was a target between which service evaluations are expected to fall in order for repurchase to take place. The current research found that for both purchase periods (T=1 and T=2), the average appraised performance scores (5.29 and 5.22, respectively) did not fall between the upper "should" expectations (6.38 for T=1 and 6.43 for T=2) and the lower "will" expectations (5.83 for T=1 and 5.80 for T=2). This may mean that "should" and "will" expectations may not be identical to the desired and adequate expectations as conceptualized by Zeithaml et al. (1993) or there is the possibility of more than two levels of expectations (as suggested by Miller, 1977). Another explanation is that the process may not be as simple as evaluations falling neatly between expectation levels. There might be other reasons as well for repurchase, outside the performance versus expectations comparison process. Finally, it could be, as suggested by Oliver (1996), that these expectations acted as contrast agents, resulting in exaggerated performance appraisals (in this case much worse than expected). Further analyses regarding this finding goes beyond the scope of this paper but may provide future research possibilities.Limitations and future research
|
The importance of confidence in leadership role: A qualitative study of the process following two Swedish leadership programmes
|
[
"Reflection",
"Confidence",
"Leadership development",
"Developmental leadership (DL)",
"Leadership programme",
"Understanding Group and Leader (UGL)"
] |
Summarize the following paper into structured abstract.
Introduction: As stated by the Swedish trade union for leaders (Ledarna, 2014), half a million people in Sweden hold leadership positions, 50 per cent of whom find the work mentally demanding and 40 per cent of whom experience not having enough time to fulfil their leadership responsibilities towards their employees (Ledarna, 2014). If given a choice, Swedish leaders would like to spend less time on administration and more on the development of their leadership skills (Ledarna, 2014). These numbers indicate the importance of offering leadership development programmes that are both time-efficient and effective in improving leaders' skills in relation to employees.
Methods: Study setting
Findings: The emerging model
Discussion: The aim of this study is to understand UGL's and DL's influence on leadership and co-workers, as well as which mechanisms are involved.
Conclusion: This study contributes to an understanding of the effects of UGL and DL, which have not been studied before. The model presented identifies a number of potentially important psychological and behavioural aspects where increased confidence in the leadership role is crucial for employee satisfaction, independent of gender. On the other hand, when confidence in one's own leadership role weakens, the impact is likely to result in employee dissatisfaction. Thus, the programmes influence intra-psychological as well as overt behavioural aspects, where an increase in overt leadership skills seems to be regarded as genuine by employees if it is backed up by confidence in the leadership role. Further research is needed to evaluate the accuracy of this model and to inform existing leadership theories.
Practical implications: Confidence in the leadership role seems important for achieving positive leadership outcomes. Although this needs further research, it is something organisations should consider when working with leadership questions.
|
"Fool me once, ...": deception, morality and self-regeneration in decentralized markets
|
[
"Markets",
"Deception",
"Morality",
"Purchasing decisions",
"Preferences",
"Rumour spreading"
] |
Summarize the following paper into structured abstract.
Introduction: Markets are one of the most robust forces in nature and society. Even after the most calamitous tragedy, market relations will recover with exceptional vitality as individuals engage again in the trading relations that are crucial for their survival and wellbeing. To gain the confidence of the other players, market participants must avoid dishonest and deceiving attitudes, and therefore one might infer that stable and long-lasting market equilibrium is inseparable from morality and honesty. Nevertheless, as insistently pointed out by non-orthodox thinking in economics, equilibrium is more the exception than the rule, implying that in many market circumstances manipulation and deception might pay off. The aim of this paper is to discuss, first through a brief literature review and second by assembling a straightforward simulation model, the ability markets have to self-regenerate even though they are systematically hit by individual behaviour that disrupts confidence and trustworthiness.
Markets: dangerous arenas or cooperation forums?: To reflect on the intrinsic and general properties of markets is a first fundamental step in the effort to understand how they benefit or penalize those who engage in exchange relations. An influential perspective on the nature of markets is Friedrich von Hayek's view of spontaneous orders. Bowles et al. (2017) recall that Hayek interpreted markets as complex systems where individuals endowed with limited knowledge of the structure and dynamics of the market compete to serve their own interests, allowing for the emergence of a coherent whole. In this view, markets need no regulation, because free enterprise would be the best way to coordinate individual actions on a large scale.
Drivers of individual purchasing decisions: price and variety matching: Markets are complex systems with multiple heterogeneous agents engaging in systematic interactions. Thus, as suggested by Bowles et al. (2017), agent-based computational models might constitute an adequate framework to approach market dynamics when one wants to account for deceit, morality, gullibility, sympathy and related phenomena. Zhang and Zhang (2007) propose a model of this kind, presenting a motivation function for purchasing a variety of a good. Motivation to buy depends on the price, on the preference for a given variety, on advertising and on the influence exerted by other consumers. Price and a match with the intended variety are the structural factors leading an agent to acquire a good from a given firm; suggestion or manipulation through advertising or other factors might distort the expected outcome. Once eventual deceit is perceived, consumers and other market players may penalize the firms that do not follow a correct conduct, both directly and by influencing the decisions of other consumers.
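A minimal sketch of such a motivation function follows. The additive-weights functional form, the parameter names and the numbers are our illustrative assumptions; the exact specification of Zhang and Zhang (2007) may differ.

```python
import numpy as np

def motivation(price, reservation_price, preference_match,
               advertising, peer_share,
               w_price=0.4, w_match=0.3, w_ad=0.15, w_peer=0.15):
    """Motivation of one consumer to buy one variety from one firm.

    price / reservation_price : cheaper than the consumer's reservation
        price raises motivation; more expensive lowers it.
    preference_match : 0..1 closeness of the variety to the preferred one.
    advertising      : 0..1 advertising pressure (the channel a firm can
        use to manipulate perceived match, per the paper's argument).
    peer_share       : 0..1 share of other consumers currently buying it.
    All names and weights are illustrative assumptions.
    """
    price_term = np.clip(1 - price / reservation_price, -1, 1)
    return (w_price * price_term + w_match * preference_match
            + w_ad * advertising + w_peer * peer_share)

# A consumer buys when motivation exceeds some threshold, for example:
m = motivation(price=8.0, reservation_price=10.0,
               preference_match=0.7, advertising=0.5, peer_share=0.3)
print(m > 0.25)  # illustrative purchase rule -> True here
```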
Not so fair play: business manipulation and deception: So far, by keeping productivity, preferences and production technologies constant, our analysis of the motivation to buy has been static. We now introduce dynamics by bringing in the main idea of our analysis: the possibility of manipulation by firms and, subsequently, moral behaviour by consumers.
Morality strikes back: consumers spread the word: Besides the direct impact analysed in the previous section, the deceiving behaviour of firm j might have reputational consequences as well. Imagine that a single consumer considers the deceiving behaviour of firm j to be, at first, worth denouncing. The disapproval of the deceiving behaviour will then eventually be passed on to the other consumers, implying a transitory phase of pervasive resentment toward the firm with the less honourable behaviour. The action of consumers in this context is driven solely by empathy toward others; it contains no direct economic gain and is grounded in strict moral principles (not wanting others to suffer from the firm's misbehaviour that the consumer who spreads the word had to endure).
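The word-of-mouth mechanism described here can be sketched as a toy contagion process: one deceived consumer starts a rumour that spreads to others with some probability. Every parameter and the spreading rule below are illustrative assumptions rather than the paper's actual simulation model.

```python
import random

def spread_rumour(n_consumers=100, p_pass=0.3, rounds=10, seed=1):
    """Share of consumers resenting firm j after a denunciation spreads.

    Each round, every informed consumer tells one randomly chosen
    consumer, who becomes informed with probability p_pass. This is a
    toy diffusion process standing in for the paper's mechanism.
    """
    random.seed(seed)
    informed = {0}  # the single consumer who denounces firm j
    for _ in range(rounds):
        for _speaker in list(informed):
            listener = random.randrange(n_consumers)
            if random.random() < p_pass:
                informed.add(listener)
    return len(informed) / n_consumers

print(spread_rumour())  # fraction of the market resenting the firm
```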
Conclusion: Markets are complex entities where agents endowed with distinct capabilities, preferences, moral codes and views of the world co-exist and co-evolve. Capturing such heterogeneity in a straightforward, comprehensive and manageable analytical framework is a difficult and cumbersome task. In this paper, market relations were approached by focusing attention on deceiving behaviour, morality and the capacity markets have to self-regenerate after being disturbed by the occasional opportunistic and deviant behaviour of some of the market participants.
|
Employees master the Nuances of travel retailing: New learning system helps staff to stay up to date
|
[
"Retailing",
"Employee development",
"Travel industry"
] |
Summarize the following paper into structured abstract.
__NO_TITLE__: Traditionally, businesses in the airport retail sector experience a high degree of labour turnover - or churn. Moreover, although similar in many ways to the rest of the retail sector, airport retail is a more pressured environment because customers who, by definition, are airline passengers, have relatively little time to browse and buy. Consequently, sales have to be transacted as speedily as possible.
Keeping staff up to speed: Taking all of these things into account, the human resources (HR) department at Nuance - which operates retail outlets spanning more than 60 locations in 18 countries and territories across the globe - has a major, on-going responsibility to keep the company's 6,100-plus staff up-to-speed in terms of specialized sales techniques that are appropriate for this sector, knowledge of an extensive range of SKUs as well as complex customs issues. It needs to do this for a workforce with a high degree of churn.
A keen challenge: Nuance's HR department felt that it faced a particularly keen challenge in ensuring that Nuance staff around the world could use SAP correctly.
The principal benefit: According to Nuance's HR manager: "The principal benefit of the current Docebo-enabled system is that it centralizes knowledge within the company and makes it available as and when it is required. It provides all the basic knowledge a newly hired person needs in order to operate effectively - and productively. It also offers all staff solid training relating to customs regulations around the world as well as how to use SAP transactions in business units including buying, logistics, customs, airport sales operations and finance."
Wholesale and distribution business: Nuance serves more than 31 million travelling customers a year. In addition to an extensive portfolio of duty and tax-free stores, brand boutiques and concept stores - covering some 73,000 square metres - Nuance provides in-flight services and operates a wholesale and distribution business, supporting the travel-retail sector.
|
Management systems: integration or addition?
|
[
"Management systems",
"Integration",
"Management techniques",
"Case studies"
] |
Summarize the following paper into structured abstract.
Introduction: A management system may be defined as a set of inter-related organizational processes sharing resources to achieve several organizational goals. In this context, an organizational management system includes planning, product/service realization, monitoring and improvement activities. According to Karapetrovic et al. (2006), management systems are supported on two main basic principles: systematization and responsibility accountability. Implementing a management system does not require a minimum level of organizational performance or the prior achievement of predefined goals, but it should enhance both. It establishes the need for the systematization and formalization of organizational processes related to the different business areas. The benefits of management systems implementation have been highlighted in numerous papers. Bottani et al. (2009) reported that companies adopting a safety management system exhibit a higher overall performance. Dordevic et al. (2010), in a study among Serbian small and medium enterprises (SMEs), pointed out that the enhancement of the enterprise's overall features, the creation of frameworks for the implementation of recognized standards for management systems and the creation of an integrated scheme for independent controls of integrated management systems (IMS) had a major impact on the development of IMS worldwide. Despite this, not every management system implementation is conducted conscientiously, leading to the common criticisms of increased bureaucratic load, organizational stiffness (Seddon, 2000) and intra-organizational ghettos. Defining organizational integration has been a quest in recent years. Garvin (1991) related integration to a measure of the alignment or harmony in an organization and, later, MacGregor Associates (1996) defined it as a single top-level management "core" standard with optional modular supporting standards covering specific requirements. Labodova (2004) and Suditu (2007) implicitly related the integration concept to sustainability and sustainable development, namely economic, when they stated that a sustainable organization is characterized by a minimum environmental impact and economic viability, with policies and vision focusing on a continual-improvement preventive approach (Labodova, 2004; Suditu, 2007; Jorgensen et al., 2006). Griffith (2000) stated that IMS blend together quality, environmental and health and safety procedures in order to demonstrate externally the company's commitment to deliver a product or service, improve environmental performance and better manage health and safety. Suditu (2007) described an IMS as the organizational structure, resources and procedures that support the planning, monitoring, quality control, safety and environmental activities of an organization. Table I shows the main organizational integration definitions reported in the literature. Management systems integration has been identified as a potential enabler of other concepts. Oskarsson and Malmborg (2005) reported how management systems integration may be understood as the organizational response to the challenge presented by sustainable development. Later, Rocha et al. (2007) reported how to insert the sustainable development concept into implemented management systems; in that paper, the authors proposed a model aimed at management systems implementation supported on the sustainable development concept. Furthermore, Meyer et al.
(2008) stated that the promotion of health and safety among employees should be performed from a process perspective, under an integrated approach. Generically, an IMS is a blend of two or more management subsystems under a holistic approach, arising from organizational interactions (Okrapilov, 2010). Recently, some authors have stated that the management systems standards suitable for integration are ruled by a risk identification approach (for product/service quality, environment or health and safety), assuring control procedures to manage those risks, which places the risk concept as a possible integrator or pivot factor of the integrated management system implementation (Noy and Ellis, 2003; Labodova, 2004; Williams et al., 2006; Grosskopft et al., 2007; Nitu and Nitu, 2011). For instance, a decrease in non-conformities could be understood as:

* a decrease in the risk of losses due to rework or scrap;
* a decrease in the risk of under-rated quality products being shipped to the customer;
* a decrease in environmental risk (lower raw material and energy consumption and lower residue production);
* a decrease in health and safety related risk (as processes are better understood, the probability of accidents decreases); and
* a decrease in risk to customers (non-conforming products are a probable cause of customer accidents).

Additionally, Jorgensen (2008) linked the sustainable management systems concept to life cycle management and integration. Nowadays, there are few quantitative and objective data related to management systems integration, compared with the data available for non-integrated systems. Due to this fact, many questions remain unanswered:

* Did integration fulfil the organizations' expectations?
* What were those expectations/motivations?
* What is the best path/approach for integration?
* Does integration truly increase management systems effectiveness?
* Which are the most suitable subsystems for integration?
* Is integration just the sum of subsystem procedures, or does it represent some additional added value?

In this paper we will try to give some answers to the previous set of questions, thus providing what we believe to be an important contribution to this field.
IMS literature review: Management systems certification worldwide overview
Research methodology: According to Sampaio et al. (2009), the majority of ISO 9001 certification research studies conducted so far are supported by survey methodologies and descriptive statistics. As such, they express conclusions that are mainly derived from opinions and perceptions about the subject, and it is common to find references in the open literature pointing out the highly subjective (and often somewhat contradictory) results derived from such studies. In order to avoid some of these issues, the research methodology used to conduct this study was case-based. However, we would like to point out that we were not able to perform a significant number of case studies, which is one of the main limitations of this research. Case and field research studies continue to be rarely published in operations management (Meredith, 1998). The case study is a research strategy which focuses on understanding the dynamics present within single settings (Eisenhardt, 1989). According to Voss et al. (2002), case research has consistently been one of the most powerful research methods, mainly in the development of new theory. Research based on the analysis of a limited number of cases is widely used in Europe but is less common among North American research teams (Drejer et al., 1998). Case studies typically combine data collection methods and observations; the evidence may be qualitative, quantitative or both (Eisenhardt, 1989). According to Meredith (1998), case research methods, if combined with rationalist methods, can offer even greater potential for enhancing new theories than either method alone. Rationalism generally employs quantitative methodologies to describe or explain phenomena and includes optimization models, simulation modeling, survey methodology and laboratory experiments. On the other hand, case and field study is one example of an alternative research paradigm and uses both quantitative and qualitative methodologies to help understand phenomena. Case/field research methods are useful for situations that require a deeper understanding of what is happening in order to modify or extend current theory. A case study typically uses multiple methods and tools for data collection from a number of entities by a direct observer in a single, natural setting that considers temporal and contextual aspects of the phenomena under study. According to Eisenhardt (1989), the single case is particularly appropriate for completely new and exploratory investigations. The multiple case study, of two to eight situations, is appropriate when there is some knowledge about the phenomenon but much is still unknown. In this paper we used this second methodology because, in our opinion, there is already considerable research related to management systems integration. Increasing the number of units further, into the low teens, brings us to a situation where both statistical and case methods are equally applicable. According to Boyer and McDermott (1999) and McLachlin (1997), the number of cases suggested to test a theory already proposed ranges from six to seven. Additionally, Voss et al. (2002) stated that the fewer the case studies, the greater the opportunity for depth of observation. Knowledge of how operations systems work can be enhanced significantly through contact with the "real-world" conditions that operations management models seek to describe (McCutcheon and Meridith, 1993). According to the authors, case study research is a primary means of exploring field conditions.
However, regardless of their purposes, case study research needs to be conducted in a manner that assures maximum measurement reliability and theory validity. The results of case study research can have very high impact, because they can lead to new and creative insights, development of new theory and have a high validity with practitioners (Drejer et al., 1998).

Sample selection
Case studies - how companies are integrating their management systems?: This section presents the results and analyses from the case studies performed. Table II shows some of the conclusions reached, both for the "high integration level organizations - HILO" and for the "low integration level organizations - LILO", which will be further discussed in this section. As shown in Table II, different chronological paths are possible when implementing an IMS. Both the two high integration level organizations and the low integration level one present different temporal milestones in achieving integration. Regarding the high integration level organizations, for Company 2 the quality management system was the "quarterback", and the environmental and safety management systems only emerged later on, following the chronological order of the standards' publication. On the other hand, Company 1 simultaneously integrated the quality, environmental and health and safety subsystems. The company with a low integration level had begun the integration process by integrating the documental procedures, while maintaining the three management subsystems operating simultaneously. Internal motivations should drive organizations towards management systems integration. This category of motivations was present in those companies that are really committed to the continuous improvement philosophy and that integrated their management systems in order to effectively increase their organizational performance (Companies 1 and 2). Surprisingly, the company with a low level of integration also presented integration motivations of an internal nature (Company 3) - internal organization and cost reduction. In the organizations that achieved effective integration, two different sequences were identified: a simultaneous implementation/integration of the three subsystems (Company 1) and a sequential integration of the quality, environmental and health and safety management subsystems (Company 2). Concerning the low integration level company, the first step towards integration had begun in 2007, simultaneously for the three subsystems, but only at a documental level. Two different integration strategies were pointed out by the organizations with a high integration level. On one hand, one of the companies was supported by a consulting firm with knowledge and competence in the three management subsystems (Company 1); this organization pointed out that the involvement of the consultancy company had been crucial for the success of the integration process. For the other company, the previous implementation and consolidation of the quality management system had a significant positive impact on the success of the integration process (Company 2). According to this company, the existence of the quality management system allowed the organization to adapt methodologies and tools already in use and consolidated to the newer management subsystems. The revision of the documental system, in order to verify which documents were common to the three management subsystems, was the integration strategy followed by the low integration level company. The company's goal was to merge the common procedures into single documents, thus reflecting the documental integration mentioned previously. As shown in Table II, for Company 2 the high compatibility between the ISO 14001 and OHSAS 18001 standards promoted the integration process between these two management systems.
On the other hand, the existence of different organizational structures for the systems' management increases the level of difficulty of the integration process. Furthermore, for this company there are management subsystem specificities that are not suitable for integration and thus create barriers to the systems integration process. Companies 1 and 3 did not report major integration difficulties. Based on the case studies performed, we were able to identify different levels of management system integration. Furthermore, we would like to point out that Company 2 stated some difficulties concerning the integration of the quality management system with the environmental and health and safety ones. Companies 1 and 2 reported significant internal organizational improvements as a result of the management system integration. On the other hand, for the organization with a low integration level the benefits were very limited and only of a documental nature. Thus, the motivations and the organization's involvement in the integration process are crucial for the resulting benefits. The organizations with a high level of integration pointed out that their organizational performance would be inferior if the integration had not occurred, namely in terms of resource optimization. Additionally, Company 1 stated that its performance would be inferior because the company would be less internally organized. Concerning Company 3, the firm's performance would be similar because there would still be separate functional areas (quality, environment and health and safety) with different processes and departments. According to the surveyed companies, the ISO 14001 and OHSAS 18001 standards are easier to integrate with each other than with the ISO 9001 standard. On the other hand, the existence of different departments for each functional area increases the complexity of the integration process.
Conclusions: The worldwide diffusion of management systems raises a set of opportunities, namely the worldwide experience and knowledge about management systems, which promotes the implementation and certification diffusion of new international standards. Nevertheless, systems implementation should not be a mere addition of management subsystems, but should be supported by the effective integration of those subsystems. Companies should only adopt those standards that are really important, necessary and of added value for the organization's processes. Additionally, when a company's strategy is to implement more than one management system, there is a clear advantage in doing so supported on an integrated approach, avoiding the development of organizational "islands" related to each subsystem. This organizational "archipelago" structure is far away from any globally optimized solution based on a holistic perspective. An IMS implementation should not be taken lightly. A careful pre-planned design should be carried out so that the final result maximizes the benefits and minimizes unwanted outputs. Several requirements should be considered before, during and after an integration process: top management commitment; resources availability; communication and integrated training across the organization; integrated audits; technical guidelines; and support from customers, employees and certification entities. The organization's complexity and the closeness of the environmental and health and safety aspects to the organization's core business are key parameters for integration success. Additionally, the integration process should take into account the organizational policy, the management style, the implemented subsystems and related systematic management, the corporate image, the market position, the organization size and the available resources. Our research has some important limitations concerning the methodology, namely the small sample size of the case studies. In case-based research, as in other types of research methodologies, we need a sample size large enough to make some inferences and generalizations. Thus, our conclusions are limited to our sampled companies, although we think that the real worldwide scenario is not too different from the one presented in this paper. The following main conclusions arise from the research conducted:

* Several chronological paths and sequences can be followed in order to implement an IMS. The adopted path, per se, does not restrict the integration level that can be achieved. On this subject, the results arising from this study are in accordance with those reported in the literature.
* Resource optimization, the definition of an integrated management approach and cost reduction are the main reasons that lead organizations to integrate their management subsystems.
* The quality management system can be the foundation for the integration of other management systems.
* The EMS and the OHSMS are easier to integrate with each other than with the QMS.
* HILO perceived management system integration as an added value for the organization, reporting that organizational performance would be less efficient if integration had not taken place.

Based on all the information collected and analysed as a result of the case studies conducted, we are able to propose the following four evolution levels on the path to full management system integration:

* Level I. Documentation integration.
* Level II. Management tools integration.
* Level III. Common policies and goals.
* Level IV. Common organizational structure.

Based on the proposed framework, Company 3 is at the first level of the integration process - the documental one. According to these integration levels, this company will evolve to the integration of management tools, followed by the definition of common policies and goals, ending in a common organizational structure. Companies 1 and 2 are at Level IV, which corresponds to a fully integrated management system. However, it is important to point out that different levels of integration could exist for different organizational functional areas. Finally, we would like to propose a set of recommendations that could be useful for those organizations that will evolve towards the integration of their management systems, in order to do so in an effective and efficient way:

* Adopt a strategy supported in the process and system approach.
* Avoid the existence of internal organizational silos.
* Adopt a unique and integrated vision, with partial divisions according to each management subsystem implemented.
* Adopt a careful pre-planned "design" that promotes flexibility.
* Assure top management commitment.
* Assure resources availability.
* Enhance communication.
* Implement integrated training activities.
* Perform internal integrated audits.
* Enhance the continuous improvement approach.
* Enhance external communication among stakeholders.

Furthermore, we can state that an IMS should be supported by:

* True and sustainable top management commitment.
* Stakeholders' involvement during the integration process.
* Clearly identified organizational areas.
* A risk assessment approach.
* Systematization of policies, programs and procedures.
* Management activities integrated into organizational planning.
* A top management vision of the system as an IMS.
|
The use of national registries data in three European countries in order to improve health care quality
|
[
"Quality improvement",
"National Health Service",
"Benchmarking",
"Sweden",
"Portugal",
"United Kingdom"
] |
Summarize the following paper into structured abstract.
Introduction: Quality of care is considered a multidimensional concept that has been given different meanings and definitions in the literature over time. There was a time, not so long ago, when quality could be defined by saying "I know it when I see it". Not today. The public is concerned. They want to know that the medical care they receive is safe, effective, and accessible to them (Marshall, 2001). In today's world, the rapid diffusion of information, the growing level of knowledge and the greater requirements of patients, the strong financial constraints and the need to introduce criteria and quality/performance indicators into the health care given have contributed to changing some of the dynamics of health institutions (Biscaia, 2002; World Health Organization, 2003). These dynamics have evolved in the direction of giving greater value to the collection and treatment of credible standardized information, thus making possible the evaluation and monitoring of services in terms of the volume of activity and results achieved (Weitraub et al., 1997; Cheng and Song, 2004; Cavalli et al., 2004). There is an emphasis on patient-centred care in most health systems in Europe. The consumer of today is more informed and demanding than ever, and calls for a description of the recommended treatment and its advantages and risks. For that reason there is a pressing and increasing need for information. Several authors highlight this increasing need for information from different perspectives: the patient perspective, in order to make informed choices, for instance; the professional perspective, to measure and improve clinical and economic costs and to help develop performance and quality indicators; and the political perspective, to compare performances and results among providers, to plan health care based on solid knowledge of needs and demands, and to draw up effectiveness strategies based on trends in population characteristics and in the health care delivered (Larsson et al., 2005). Additionally, the publication of such information could drive up the overall quality of care. Public reporting of comparative information on the health care quality of physicians, hospitals, and health plans through "report cards" is hailed as a plausible way to improve health care. Without publicly reported comparative information on health care quality, patients may choose their physicians based on more measurable characteristics, such as cost, or by word-of-mouth or other informal referral practices not obviously related to their needs (Werner and Asch, 2005). Reporting quality information publicly is presumed to motivate quality improvement through two main mechanisms. First, public quality information allows patients, referring physicians, and health care purchasers to preferentially select high-quality services (physicians and institutions). Second, public report cards may motivate physicians to compete on quality and, by providing feedback and by identifying areas for quality improvement initiatives, help physicians to do so (Werner and Asch, 2005; Spiegelhalter, 2005). We tend to think ourselves unusually enlightened in examining outcomes of care.
In fact, historical precedents for this are noteworthy, not only because of the compilation and comparison of outcomes and other data but also because of vigorous efforts to discover the causes of variations and use this knowledge to improve care (Iezzoni, 2003). For example, English hospitals, which were primarily charitable institutions serving the poor, had independently accumulated patient statistics since the 1600s. For centuries, Great Britain gathered data on population death rates, primarily to track epidemic illness and later as a means of encouraging new subscribers and donations to the hospitals (Iezzoni, 2003). In 1863, Florence Nightingale published the third edition of her book entitled "Notes on Hospitals", recommending fundamental changes in the configuration, location, and operation of hospitals, as well as highlighting the role of collecting patient data to reduce deaths caused by unsanitary conditions. Nightingale continued to argue that compiling and disseminating patient data and outcome statistics for hospitals were critical to understanding and improving quality of care (Iezzoni, 2003). In the last decade, the development of new policy orientations, such as the demand for accountability and quality improvement strategies, or a growing interest in patient satisfaction assessment, has been an incentive for developing, throughout the world, health care registries on a local, regional, national, or international level. There is a strong commitment today to quality issues, including support in establishing systems for the continuous follow-up of quality and results. It is emphasised in this paper that the supply of information and the follow-up of activities in health care should be strengthened so that the public receives good information about care and so that efficient health care is promoted (Ovretveit, 2003). Conditions in health care are changing constantly. New methods of investigation and treatment affect the structure, contents, quality and results of the care provided. Quality indicators are important so that this process of change can be discerned, and they must be capable of being reviewed and completely replaced. Quality indicators can be used for internal and/or external reasons. Internal reasons are related to the various management functions of hospitals as health service delivery organizations, and the indicators are used as management information to monitor, evaluate and improve these functions in the short or long term (strategy). External reasons are related to accountability questions asked by other stakeholders such as the financier (insurer, state, or other), patients/consumers and the public at large (World Health Organization, 2003). The purpose of this paper is to describe and highlight the role of health care registries and the use of quality/performance indicators in three European countries, Portugal, Sweden and the UK, in order to improve patient care.
Portugal - first steps in the right direction ...: Good care, of high quality and on equal terms for the whole population, is the ultimate quality goal for all health care and medical services. There is a need for systems that support planning, implementing, following up and continuously developing quality in activities. The establishment and expansion of national registries in Portugal can be seen as a response to rapid changes in society and the health services, as well as to increasing demands for improvements when it comes to patient focus, effectiveness and efficiency (Observatorio Portugues dos Sistemas de Saude, 2003). In the last decade, we have witnessed an ambitious reform to increase efficiency and improve the quality of the health care system in Portugal. The need to improve the health care system has been clearly identified by the authorities for several years, but attempts at reform did not survive the political cycle and were never fully implemented. A comprehensive reform of the health care system was undertaken in 2002. According to the report of the OECD (2004, p. 16), "In contrast with past reform programmes, which were rather gradual, the strategy now was to create a big bang in the health sector, making efforts essentially irreversible." The reform has two main aims: to deliver better-quality public health services than at present but at no higher cost; and to reduce the underlying growth rate of public health care spending over the medium term. New legislation has been approved separating the functions of regulation, financing and provision of health care, setting up new models of financing and management for the hospitals, and introducing incentives towards productivity and quality improvement. In addition, the authorities have been preparing a ten-year framework aimed at continuing to improve the health status of the population, by integrating the strategic factors of health that are not linked to the health care system and by defining quality indicators which allow quality to be measured (OECD, 2004). There is consensus among all stakeholders that this kind of reform needs a good information system to monitor and evaluate the results. Besides that, and according to the same report, in Portugal "Quality control was absent. There were no standardized information systems that could have enabled the monitoring of the performance of managers and institutions" (OECD, 2004, p. 24). In our opinion this is the most important barrier and, at the same time, the biggest challenge in the short term for the Portuguese authorities: developing an integrated and homogeneous data system for all health care institutions which allows comparisons and the sharing of clinical and administrative information across the system. The reality now, in most health care institutions in Portugal, is that there is a set of unconnected and non-communicating databases, which means that a lot of information is diffuse, sometimes duplicated, and of poor reliability and utility. Nevertheless, there are some good examples, in the clinical field, of the implementation and management of registries at regional and national level, as in the oncology and cardiovascular areas. The regional oncology registry is a community-based registry, with clinical and administrative data, which allows the performance indicators of the prevention programmes in this area to be monitored, and the quality of those programmes to be assessed in the efficiency and efficacy dimensions.
Recently, the INetROR was implemented, an important IT tool in the form of an intranet site which all participants can access at any time to send data, share information and compare results (Observatorio Portugues de Sistemas de Saude, 2004). In the area of cardiology there are four National Registries: Acute Coronary Syndrome (Ferreira et al., 2004), Percutaneous Coronary Intervention (Pereira et al., 2004), Clinical Electrophysiology (Morais et al., 2005), and the Cardiac Surgery National Survey (Uva et al., 2003). All these Registries collect clinical and administrative data on all procedures performed. Systematic registration of data from clinical practice in cardiology using local, national and international registries has assumed increasing importance for quality assurance in the management of cardiovascular disease throughout Europe. In 2005 the Ministries of Health of all EU Member States accepted the Cardiology Audit and Registration Data Standards (CARDS) project, which is a minimum core standard data set with definitions and coding for each of the three modules of cardiovascular health information systems: ACS, PCI and Clinical Electrophysiology. Since the beginning of 2005, systematic registration of data from the clinical practice settings of these three modules, especially in the PCI registry, has used the CARDS standards, which ensure that comparable data will be collected throughout Europe. With this methodology it is possible to define quality indicators and, consequently, to assess quality and improve cardiac care in Europe, based on large populations and international multicenter studies.
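To make the idea of a minimum core data set concrete, the sketch below models a standardized registry record in Python. This is a minimal illustration only: the field names and coded values are hypothetical and do not reproduce the actual CARDS variable definitions.

```python
# Illustrative sketch only: field names and codes below are hypothetical,
# not the actual CARDS minimum dataset variables.
from dataclasses import dataclass
from datetime import date

@dataclass
class PciRecord:
    """One standardized percutaneous coronary intervention (PCI) entry."""
    centre_id: str          # hospital performing the procedure
    procedure_date: date
    indication: str         # coded value, e.g. "STEMI", "NSTEMI", "stable"
    success: bool           # procedural success flag
    in_hospital_death: bool

def mortality_rate(records: list[PciRecord]) -> float:
    """Crude in-hospital mortality across a set of standardized records."""
    if not records:
        return 0.0
    return sum(r.in_hospital_death for r in records) / len(records)

sample = [
    PciRecord("PT-01", date(2005, 3, 1), "STEMI", True, False),
    PciRecord("PT-01", date(2005, 3, 2), "NSTEMI", True, False),
    PciRecord("PT-02", date(2005, 3, 3), "STEMI", False, True),
]
print(f"Crude mortality: {mortality_rate(sample):.1%}")  # 33.3%
```

Because every centre codes the same minimum dataset in the same way, rates computed like this become comparable across hospitals and countries, which is precisely the point of such harmonisation.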
Sweden - the masterpiece of registries and quality culture ...: A strong ambition of politicians in Sweden is to improve the quality of care, strengthen the position of patients, and offer them freedom of choice within specified limits, mostly due to economic constraints. The different professional organizations in the Swedish health care system have, in recent years, done extensive work to develop models for quality improvement. As an important complement to these directions, a system of national quality registries has been established in the Swedish health and medical services over the last 15 years or so (National Health Care Quality Registries Report, 1999). The registries contain individual-based data on diagnoses, treatment and outcomes. Statistical compilations are made at an aggregate level and are presented both for each department and for the country as a whole. The registries provide a unique means of promoting and monitoring quality improvement efforts in the Swedish health service. Today, there are over 50 voluntary national health quality registries, which either have achieved or are in the process of achieving nationwide coverage. They were started by representatives of the medical profession and established to support efforts to improve quality. Their purpose is to support learning and development, and they are not intended for supervisory or similar purposes. The registry managers are distributed among a variety of hospital departments administered by many different health authorities. In most cases, the development from a local to a national registry has taken place gradually. The Federation of Swedish County Councils, the National Board of Health and Welfare (NBHW), and the Swedish Society of Medicine collaborate at the national level in providing financial and other kinds of active support for the creation and development of the national quality registries. Since 1990 resources have been allocated within the framework of the "Dagmar Agreement" between the Government and the health authorities to support the development and operation of the registries (Synnerman, 2000). The establishment of a national registry is the result of a consensus in the medical speciality concerned on important concepts and quality indicators, and of a conviction that the registry provides a quality measurement tool based on these indicators. These tools may be developed and refined from year to year. The national registries cover different diagnoses and treatments. Each quality registry has chosen a number of quality indicators concerning procedure and outcome data important for its own objectives. The intention of using these registries is to make comparisons over time, among hospitals, and with national results - benchmarking. The contemporary policy of quality improvement is based on a directive from the NBHW (the Government's central expert and supervisory authority for the social, health and medical services in Sweden) with the title "Quality Improvement Systems for Health Care and Medical Services". These directions require that the patient's needs and expectations, as well as all health care, should be addressed by systems for planning, implementing, evaluating and improving the quality of the health services provided (National Board of Health and Welfare, 2001). By combining performance indicators from national or regional registries with patient-experienced quality-of-health data, the processes can be quality-assured from different perspectives.
For example, a model comprising data from the national hip-fracture registry, the cost-per-patient registry and the DRG registry together with health-profile data makes it possible to quality-assure the hip-fracture process from four perspectives, namely: functional health status, the clinical perspective, patient satisfaction and health economy. This quality improvement tool is called the clinical value compass, named to reflect its similarity in layout to a directional compass with four cardinal points. These points refer to:
1. functional status, risk status and wellbeing;
2. costs;
3. satisfaction with healthcare and perceived benefit (using the EQ-5D instrument); and
4. clinical outcomes.
To manage and improve the value of health care services, providers will need to measure the value of care for similar patient populations, analyse the internal delivery process, and determine if these changes lead to better outcomes and lower costs (Nelson et al., 1996; Swedish National Hip Arthroplasty Register, 2005). Unlike a traditional compass, the points on the clinical value compass are not used to navigate in one particular direction versus another. Rather, the compass as a whole serves as a guide to maintain perspective on the entire care process. A specific improvement initiative can focus on one quadrant of the compass, the clinical outcomes for instance; however, the overall project must consider all four quadrants and analyse the health care process as a whole (Stegmayr et al., 2003). The National Health Registries have attracted great international attention and represent a unique resource, from a quality improvement perspective, for the Swedish Health and Medical Services.
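As a rough illustration of the benchmarking idea behind the registries and the value compass, the following minimal sketch (department names and all numbers are invented) compares each department's indicator in the four quadrants against the national aggregate:

```python
# Minimal benchmarking sketch (invented data): compare each department's
# value-compass indicators with the national aggregate, as the Swedish
# quality registries enable.
from statistics import mean

departments = {
    "Dept A": {"functional": 0.74, "clinical": 0.91, "satisfaction": 0.82, "cost_kSEK": 58.0},
    "Dept B": {"functional": 0.69, "clinical": 0.88, "satisfaction": 0.90, "cost_kSEK": 61.5},
}

for quadrant in ("functional", "clinical", "satisfaction", "cost_kSEK"):
    national = mean(d[quadrant] for d in departments.values())
    for name, values in departments.items():
        diff = values[quadrant] - national
        print(f"{quadrant:>12} | {name}: {values[quadrant]:6.2f} "
              f"(national {national:6.2f}, diff {diff:+.2f})")
```

A real registry comparison would of course risk-adjust the indicators before ranking departments, but the basic logic of comparing each unit with the aggregate is the same.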
United Kingdom - moving fast toward the key point ...: Florence Nightingale was one of the first in the UK to promote the collection, statistical analysis and public release of institutional surgical outcome data. When she published her league tables of London hospitals in the mid-19th century she received acclaim in some quarters but was ostracised in others (Spiegelhalter, 1999). Times have changed, however. Medical science and technology are advancing. Improvements in information technology have seen an explosion in the amount of medical information available to all citizens through a multitude of sources. Combined with growing concern over clinical and administrative standards in the wider National Health Service, open benchmarking of clinical outcomes and institutional performance became a high-profile issue, contributing to the introduction of the concept of clinical governance outlined in the white paper A First Class Service published by the Department of Health (1997). This document set out a package of proposals to support the delivery of more consistent and higher-quality care to patients. The aim was to drive performance improvement by setting measurable national standards, through National Service Frameworks and the National Institute for Clinical Excellence, and by providing an environment for improving local clinical care through clinical governance. This would be underpinned by improved professional self-regulation and by the development and monitoring of standards through the Commission for Health Improvement (CHI), the NHS Performance Assessment Framework and the National Survey of Patient and User Experience. These were wide-ranging proposals and represented the first attempt to understand and measure the quality of service offered by the NHS since its inception 50 years previously - a remarkable deficiency for the biggest organization in the UK. The Society of Cardiothoracic Surgeons has a 25-year history of voluntary data collection and analysis. Its most recent incarnation is the UK Cardiac Surgical Register. The Society established the Register in 1977 to collect activity and mortality data on all cardiac surgical procedures performed in each NHS cardiac surgical centre, amounting now to 35,000 procedures a year. The process represented the first attempt in Britain by any specialty group to collect national activity and outcome data. The measurement of outcomes from medical or surgical interventions is now seen as good practice, but publication of individual doctors' results remains controversial. After the General Medical Council hearings and the subsequent Bristol Royal Infirmary Inquiry into paediatric cardiac deaths, cardiac surgeons expected a stinging attack on British cardiac surgical practice. What emerged instead, in 2001, was a comprehensive report highlighting many of the difficulties facing frontline clinicians and managers in the NHS (available at: www.bristrolinquiry.org.uk/finalreport/index/htm) (Learning from Bristol, 2004). The report included 198 recommendations, of which two stated that patients must be able to obtain information on the relative performance of the trust and of consultant units within the trust.
This led to an increasing belief that the interests of the public and patients would be served by the publication of individuals' surgical performance in the form of postoperative mortality. A precedent for this existed in the USA, where in 1990 the New York Department of Health published mortality statistics for coronary surgery for all hospitals in the state, and has published comparable data each year since (Chassin et al., 1996). A newspaper, Newsday, successfully sued the department under the state's Freedom of Information Law to gain access to surgeon-specific data on mortality, which the newspaper published in December 1991, evoking a hostile response from surgeons. New Jersey and Pennsylvania have also started publishing mortality data, but the practice has not yet spread to any other state or country. Cardiac surgeons had seen this coming, so during the Bristol Royal Infirmary Inquiry the Society of Cardiothoracic Surgeons of Great Britain and Ireland tried to redress perceived deficiencies in surgeons' approach to national data collection and audit by producing unambiguous guidelines on data collection and clinical audit in cardiac surgical units (available at: www.scts.org) and by debating how to measure their clinical performance. A detailed analysis by the Nuffield Trust has shown that the arguments for and against publication are finely balanced (Marshall et al., 2000). The reason for publication determines the way such data are presented. The two key reasons are either to facilitate patient choice or to demonstrate safety. Publishing for patient choice requires detailed, risk-adjusted tables of outcomes published in a comparative fashion. Publishing to indicate whether a surgeon is safe or not requires agreeing a threshold of unacceptable mortality and then showing where each individual surgeon's results lie relative to that threshold. The national service framework for coronary heart disease, launched in early 2000, included clear recommendations for comparative audit based on the Society of Cardiothoracic Surgeons' clinical dataset (Keogh et al., 1998). As part of this framework, data collection in England would shift from the Society of Cardiothoracic Surgeons to the central cardiac audit database, part of the National Clinical Audit Support Programme in the NHS Information Authority. The price the surgical community had to pay for these long-term benefits was the publication of individual surgeons' results: the first set of results would be released in some form by the end of 2004. But to retain the confidence of all parties - surgeons, the public, and the healthcare regulators - the project would be overseen jointly by the surgical community, the then Commission for Health Improvement, and the Department of Health (2000). Now that these results are published, mostly through requests based on the UK's Freedom of Information Act, medicine in the United Kingdom has crossed a threshold into a new era. Cardiothoracic surgeons will have shown that it is possible for a surgical specialty to review its own performance at an individual clinician level by professional consensus. This system is not perfect; it is a first step which, in the words of Alan Milburn in 2003, when he was secretary of state for health, "has opened a door which other branches of medicine will need to enter" (Department of Health, 2000, p. 17).
Most importantly, cardiac surgeons will have opened a more general debate that will revolve around the balance between the relative influence of individual physicians and institutional influences on patient outcomes, and how this relation translates into transparent public accountability. The final question is whether, with transparent systems in place to maintain standards, it is necessary to publish a list of names, or whether the public good can be served just as well by the knowledge that appropriate mechanisms are in place and independently regulated.
Conclusion: The purpose of this paper was to present a general view of quality improvement strategies based on national and regional health care registries in Portugal, Sweden and the UK. It was also our intention to highlight the role of those registries in these three European countries in improving patient care. To improve care for their citizens and to realise the potential efficiency gains, policymakers are looking for methods and tools to measure and benchmark the performance and quality of their health care systems. In this way, the implementation of national health registries, and the effective use of their data, assumes a central place on the agenda of politicians in most European countries, and in other countries all over the world. The National Health Registries have attracted great international attention and represent a unique resource, from a quality improvement perspective, for the Swedish Health and Medical Services. In both Portugal and the UK, the imperatives of accountability and quality improvement make the wider development and implementation of national quality registries inevitable. In this paper we have seen some differences and similarities between these three countries with a common aim: to improve the quality of care, delivered on equal terms for the whole population, in an effective and efficient way.
|
Certifying a university ENT clinic using the ISO 9001:2000 international standard
|
[
"Quality management",
"ISO 9000 series",
"Quality standards",
"Ear",
"Nose and throat medicine",
"Germany"
] |
Summarize the following paper into structured abstract.
Introduction: To keep treatment efficient and quality-assured, highly specialised treatments, in some cases for rare illnesses with complicated healing processes, have to be integrated into an overall quality plan. To cope with changing clinical routines, organisational and structural changes have to be realised and new regulatory mechanisms introduced. Against the background of statutory duties in various countries to introduce internal quality management (QM) systems, the increased importance of this subject has led to numerous activities and successful health system certifications (Peters et al., 2004; Doebert et al., 2005; Jansen-Schmidt et al., 2001; Beholz et al., 2003; Staines, 2000). Goethe University Hospital managers decided to introduce an internal QM system using ISO 9001:2000. This standard favours a process-oriented approach, describes requirements for the QM system, and gives advice about appropriate and suitable application, with particular attention to patient expectations. In this context the term "process-oriented quality management" is to be understood as follows. Patients, referring doctors and others place certain demands on clinic staff acting as service providers. In the clinic, these requirements are put into practice through certain "processes" (e.g. treatment paths) for which the clinic director is responsible. He or she makes personnel, material and equipment (e.g. to perform surgery) available. To do this appropriately, certain information on the services rendered (e.g. patient satisfaction) is necessary and must be provided using measurements and analyses. Traditionally, staff in each ward or clinic department developed their own rules for daily routines and for handling certain problems. These rules were tested for effectiveness and practicability and, if necessary, amended. It was then decided in which clinic department routines would be most effective. Thus, superfluous work could be eliminated. All process sequences were defined, tested with respect to practical applicability, added to a written manual and finally authorised for use. An additional challenge was structuring and coordinating all work sequences, especially at their interfaces, to form a general master plan. This led to the rationalisation of the whole sequence of operations and the avoidance of redundancies. Additionally, criteria on how staff, material and medical equipment can be used optimally became clear. The aim was to continuously optimise work flow within the QM system (Figure 1). Goethe University Hospital ear, nose and throat (ENT) clinic staff then inaugurated their QM system. The aim was to implement binding, transparent and intelligible rules for all staff and for all work concerning patient treatment, teaching and research. The clinic's official external (ISO 9001:2000) certification was achieved two years after implementation, thus confirming that the clinic's QM system matches national and international standards with regard to health insurance companies, patients and referring doctors. In this article, therefore, we hope to give useful information to others wishing to realise similar projects.
Method: The newly introduced QM system was based on ISO 9001:2000, an international standard applicable to many different professional groups. It defines the QM system certification requirements. We followed the so-called 12-step plan to help us implement the QM system:
1. Clinic managers decide to introduce a QM system.
2. QM representatives are nominated.
3. QM executive board convened.
4. Staff informed and educated.
5. Quality objectives defined.
6. Actual state analysis completed.
7. Actual state analysis outcome evaluated.
8. Quality manual and other documentation established.
9. System implemented.
10. Internal audit realised.
11. QM system externally audited and certified.
12. QM system and repetitive audits maintained (after Kahla-Witzsch, 2003).
Clinic director's decision to introduce a QM system
Results: External audit and certification
Discussion: The basis of Goethe University Hospital ENT clinic's QM system is ISO 9001:2000, formulated generally for application to different businesses. At first sight, the standard's somewhat technical phrasing is rather unfamiliar to healthcare professionals. Furthermore, it has to be adapted to the needs of public health service institutions. Nevertheless, the ISO 9001:2000 standard's clear instructions for all sectors provide a good framework. How this frame is filled depends on the department's requirements, aims, size and structure. The ISO 9001:2000 aims at a process-oriented approach (Edelstein, 2001; Sweeney and Heaton, 2000). Not only is product or service quality assessed (though this has priority); the processes that lead to the product or service also have to be defined and must be controllable. Developing and introducing a QM system into hospital routines takes place in clearly defined steps. Design and implementation, however, take their cue from service user needs and expectations, the organisation's aims, as well as the clinic's or department's size and structure. Patients and staff alike profit considerably from re-organisation and re-structuring (van den Heuvel et al., 2006). The QM system gives transparency to wards and departments and enhances employees' awareness of the clinic's workflow. Involving all staff members in designing and implementing the QM system clearly results in higher motivation, as employees identify with "their" QM system. Positive views on internal changes and a willingness to improve influence job satisfaction positively. Consequently, this reflects favourably on patient handling. The process-oriented approach, which aims to be strongly patient-oriented in our QM system, can improve efficiency and quality outcomes considerably. The first phase between implementation and certification is work-intensive and time-consuming. However, two and a half years after successful certification, we can point out that re-structuring and re-organising our services has enhanced efficiency in all sectors. In short, the whole clinic has profited from this re-organisation. QM does not end with certification. It is no transitory task but has to be understood as a continuous process. QM implies constant vigilance for non-compliance and its causes. Therefore, staff and patient evaluations are done regularly. Wrong or undesirable events have to be pinpointed, analysed and evaluated, non-conformance causes identified and corrective actions taken. If these corrections are effective after four weeks, then the ENT clinic QM board adds them to the quality manual. Thus, hidden or new flaws in the system are found early and amended immediately.
|
Doctoral boot camps: from military concept to andragogy
|
[
"Writing",
"Boot camp",
"Education",
"Research culture",
"PhD"
] |
Summarize the following paper into structured abstract.
Introduction: "Publish or perish" is an old adage in academia that embodies the pressure for academics to conduct research and publish. Such pressure is a direct consequence of universities' fervour to build a strong reputation, which according to Linton et al. (2011) is largely dependent on the output and quality of the research being conducted. Rizzo Parse (2007) argues that over the years, university leaders "have been increasingly focusing attention on what they call building a research culture". It is believed that having such common values and beliefs ingrained in the day-to-day practices would ultimately ensure a drive towards research and publications.However, building a research culture is not an easy task and even more so for teaching-focussed institutions. There is an overabundance of variables that can either positively or negatively affect the setup and progress of such endeavour. To this effect, Rizzo Parse (2007) argues that "birthing a research culture requires at least three major essentials: a commitment to become known as a scholarly institution, qualified leaders to guide the planning and development of such an undertaking, and financial and other resources to support the endeavor".Hannover Research's report (Hannover Research, 2014) contends that amongst others, "successful [higher education] institutions provide significant support to faculty research efforts". Although there is a plethora of strategies that can be used to develop a research culture, Hannover Research places faculty training and support programmes as one of the most influential factors for the development of a research culture. Freedenthal (2008) in a survey with 100 faculty members found that 97% would provide grant-writing support but that manuscript writing seminars were relatively common with nearly 55% of faculty members offering such support.Another example of such faculty training and support programmes is academic boot camps. Traditionally, boot camps have been confined to the military world designating a brief but intensive training programme aimed at bringing recruits or confirmed militaries up to the level. The same concept has later been transferred to other areas and recently academia where faculty staff members would come together for a short and intensive working session on a particular aspect of their research journey.Building on Rizzo Parse's notion of building a research culture, Hannover Research's training and support programme, and the growing momentum of academic boot camps, this research studies take a case study approach to explore the use of academic writing boot camps as a tool to building research culture in a teaching-focussed institution moving towards a research institution. This research is even more pertinent to the broader literature as it takes place in an institution within the Small Island Developing States (SIDS). Thirty-nine countries have been categorised as SIDS. In general, these countries are small, remote and share similar characteristics and challenges towards sustainable development (United Nations, 1994). The current study was conducted in the largest Mauritian private tertiary education institution, which has been offering courses from an Australian partner university for a nearly 15 years. 
With the partner institution seeking the "Association to Advance Collegiate Schools of Business" (AACSB) accreditation, it was imperative for the Mauritian counterpart to embark on a research journey in order to meet the requirements of AACSB, which specifically required staff members to conduct research. This situation constituted a significant challenge for the academics, as they were suddenly expected to add another dimension (research) to their work activities, which had been focussed only on teaching for nearly two decades. To assist the academic staff members in this transition period, a number of strategies were identified and implemented. Aligned with Rizzo Parse (2007) and Hannover Research (2014), some examples of measures taken are (1) sponsorship of PhD study, (2) reduced lecturing workload, (3) setting up a dedicated research office and (4) running a number of workshops aimed at enhancing research skills. One of these workshops consisted of running a writing boot camp. Although research to date on the effectiveness of writing boot camps in either strengthening participants' skills or enhancing individual and collective long-term academic goals is scarce, some authors have found positive outcomes from such interventions. However, many of these studies have relied solely on post-programme questionnaires rather than more longitudinal measures such as staged evaluation focus groups and assessments of the quantum of theses or journal articles produced over periods of time. The aim of this study is thus to bridge this gap by exploring "Writing Boot Camp Cycles" rather than a single event. In addition, this research brings novelty by looking at a particular context. Whereas most research studies on boot camps are carried out in established institutions, this research brings a novel perspective, that of a budding research institute, and even more so in the context of a small island developing nation. It is believed that the insights gained and the results of this research can help both established institutions struggling to improve research output and emerging research institutes in building a sustainable research culture.
Literature review: Concepts and definitions
Methods: The objectives of this study are twofold - first, to examine the effectiveness of writing boot camps in helping early career researchers and PhD students achieve their writing goals; second, to understand and explain the role of writing boot camps in building a research culture in institutions transitioning from primarily teaching to "teaching and research" roles. In addition, consonant with some of the shortcomings and challenges identified in the literature (Busl et al., 2015; Von Isenburg et al., 2017, p. 170), the boot camps were designed and conducted in such a way as to maximise successful writing outcomes. A case study approach was deemed the most appropriate method to gain insights into the effectiveness of the boot camp events. Indeed, Yin (2014, p. 50) argues that a case study is best suited when "a 'how' or 'why' question is being asked about a contemporary set of events, over which a researcher has little or no control". In this case, the insights sought revolved around the how and the why of boot camps in setting up a research culture, and the researchers did not have any control over behavioural events. Yin (2014, p. 50) further argues that methods are not mutually exclusive, in the sense that other methods, for example a survey, can be used within case studies and that case studies can in turn be used within other methods. The critical aspect is to ensure that the research question is well understood and is being answered rigorously through the selected methods. In the case of this study, getting participants' insights into the usefulness and effectiveness of the boot camp was essential, and it was important to capture such insights at various points in time. Therefore, an initial boot camp with 33 academic staff members was run, after which a survey questionnaire was administered to all participants. A follow-up focus group was later held with 36% of the initial boot camp participants to gauge the effectiveness of the boot camp in sustaining research activities, after which a second boot camp cycle was run. The following section outlines the various steps in the preparation, implementation and follow-up of the writing boot camp.
Initial boot camp
Analysis and discussion: The most appropriate analysis was carried out once the data were gathered. For the survey after the initial boot camp, simple descriptive analysis was carried out, since it was an exploratory study and the aim was to gain insight into the effectiveness of the boot camps on individual progress. The data were cleansed, and Microsoft Excel was used to generate the descriptive statistics. The amount of data and the rather exploratory nature of the research did not warrant more sophisticated analysis tools. Still using Microsoft Excel, thematic analysis was used to extract common themes from the open-ended questions. As for the focus group, a transcript of the session was generated and thematic analysis carried out.
Perceptions of the initial writing boot camp
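For readers who prefer a scripted workflow over spreadsheets, the following minimal sketch reproduces the kind of analysis described above. The column names, response values and coded themes are invented for illustration only.

```python
# Minimal sketch (invented data): descriptive statistics for the
# post-boot-camp survey plus a simple tally of themes coded during
# thematic analysis of the open-ended answers.
from collections import Counter
import pandas as pd

survey = pd.DataFrame({
    "words_written": [850, 1200, 400, 2100, 950],
    "goal_achieved": [1, 1, 0, 1, 0],      # 1 = writing goal met
    "satisfaction":  [4, 5, 3, 5, 4],      # 1-5 Likert scale
})
print(survey.describe())                    # counts, means, quartiles
print(f"Goal achievement rate: {survey['goal_achieved'].mean():.0%}")

# Themes assigned to each open-ended comment by the coders
coded_themes = [
    ["peer support", "quiet environment"],
    ["facilitator feedback"],
    ["peer support"],
    ["goal setting", "facilitator feedback"],
]
theme_counts = Counter(t for comment in coded_themes for t in comment)
print(theme_counts.most_common())
```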
Conclusion: This study extended the concept of writing boot camps to writing boot camp cycles, coupled with differentiated facilitation, in supporting the creation of a research culture amongst academics. The research was conducted in a teaching-focussed higher education institution within a SIDS. Overall, the participants shared that the boot camp cycles helped them achieve their writing goals. The findings of this study indicate that it is crucial to have a proper environment for an effective boot camp session. The boot camp facilitator should be in a position to provide timely and individualised feedback. The study revealed that peer support is as important as facilitator support in the success of the boot camp. While participants who set clear and realistic goals were the ones who achieved their writing targets more easily, the study pointed out that other factors such as self-discipline and focus were also important in helping the participants achieve their writing objectives. Furthermore, this study indicates that it is vital to customise writing boot camp sessions depending on the specificities of the participants, such as field of research, early career versus experienced researchers, or type of research output. In other words, there is no one-size-fits-all writing boot camp structure. Instead, it is imperative for institutions to gauge the needs of the researchers and provide adequate coaching for them to thrive. Writing boot camps can offer the right ingredients to promote research amongst academic staff members in such settings, as outlined in this research. The outcomes of this study are pertinent to higher education institutions wishing to set up a research arm, and more specifically in developing countries with a budding research culture and limited research resources. The main contribution of this study is to show that using boot camp cycles with differentiated facilitation can help academics achieve their writing goals and, consequently, help create a research culture. While the outcomes seem promising, a few avenues can still be explored. Since this study looked at two boot camps, future studies can consider running more cycles over a longer period of time. It is further recommended to conduct debriefing sessions after each boot camp and use the outcomes to refine subsequent sessions. Moreover, it is also recommended that future researchers classify and group participants based on their research interests or objectives. In essence, although building a research culture is a lengthy process, the current study indicates that writing boot camp cycles with facilitators can actually be used to activate a research culture and garner momentum for academic staff members to publish.
|
Capability, social capital and opportunity-driven graduate entrepreneurship in Tanzania
|
[
"Entrepreneurship",
"Social capital",
"Tanzania",
"Capability approach",
"Conversion factors"
] |
Summarize the following paper into structured abstract.
Introduction: In Tanzania, about 700,000 graduates enter the labor market annually, but only 40,000 of them find jobs in government or established companies, despite this being their preferred place of work (Ngalomba, 2013). The rest continue their education, find a low-profile temporary job or enter the informal sector. Despite significant government attempts to boost graduate entrepreneurship, and the introduction of entrepreneurship education at university level (Mwasalwiba, 2012; Weiss, 2015), few graduates actually embark on an entrepreneurial path. While this problem is not unique to the Tanzanian context, the country's economic-political history of Ujamaa (or African socialism) seems to amplify the problems (DeJaeghere, 2013), making Tanzania a suitable case for investigation.
Entrepreneurship in Tanzania: Historical developments
Conceptual framework: Scholars have differentiated between necessity- or subsistence-driven and opportunity-driven entrepreneurship (Bosma and Harding, 2007). Necessity-driven entrepreneurs are pushed into enterprising activities because all alternative options for work and income generation are absent or unsatisfactory. Alternatively, opportunity-driven entrepreneurship follows an individual's personal desire and choice to exploit a particular opportunity and seek self-actualization (Smallbone and Welter, 2001). While entrepreneurship in developing countries like Tanzania is often understood primarily as necessity entrepreneurship, this particular study focuses on opportunity-driven entrepreneurship. Traditionally, scholars have attributed the difficulty of developing a fully functioning opportunity-driven entrepreneurial ecosystem in Tanzania to the poorly functioning institutional context. In line with Baumol's (1990) argument that a country's institutional framework determines whether productive, unproductive or even destructive entrepreneurial activities will take place, authors have suggested that overcomplicated paperwork (Haselip et al., 2015) and a non-supportive financial sector (Marwa, 2014; Mwasalwiba et al., 2012) are keeping young graduates from engaging in entrepreneurial activities. While valid, these institutional arguments do not fully explain why the majority of graduates continue to struggle while some are obviously able to overcome these constraints.
Methodology: The qualitative nature of the research question prompted us to conduct interpretive qualitative research (Denzin and Lincoln, 2005) based primarily on interviews and supplemented by policy documents that were collected and analyzed in the Spring and Summer of 2016. The aim of the qualitative inquiry was to uncover the underlying mechanisms that stimulate or prevent Tanzanian graduates from considering a career in entrepreneurship.
Findings: Entrepreneurship as a potential valued functioning among graduates
Discussion: This study explored why most Tanzanian graduates do not engage in entrepreneurial careers even in the face of a lack of alternative career opportunities, while some do, and how we can explain such different responses to the same institutions. The empirical findings show that the entrepreneurial climate in Tanzania is clearly in a state of transition. In line with the work of Kabanda and Brown (2015), this study shows that increased positive attention from policy makers, educators and the media, combined with the introduction of faster internet and mobile technologies, has already spurred a small group of graduates into entrepreneurial action. As such, these findings reflect those of Lindeman (2012), who already pointed to the importance of conversion factors like devices and external support in translating valued functionings into actual functionings. However, this study also shows that not all graduates are equally likely to benefit from such conversion factors. Indeed, the vast majority do not seem able to utilize their agency to translate entrepreneurial ambitions developed while in school into action once graduated. As stated by Sen (1980), the motivation to act as an entrepreneur depends on the personal conviction that success is possible, maybe even within reach. Based on the findings it is argued and shown that, in the Tanzanian case, access to social capital (e.g. Hite, 2005) plays a pivotal role in empowering graduates to develop such convictions. In fact, the uneven distribution of social capital explains why few graduates can cultivate the personal freedom and power to develop entrepreneurial functionings. For instance, one area where graduates clearly lack agency is start-up capital: most of them lack both the skills and the connections to seriously pursue access to external funding from banks or venture capitalists, and give up at the first rejection. Likewise, complying with government regulations or obtaining a lucrative government or multinational contract proves difficult without access to the right connections, thus limiting the entrepreneurial capability of the target group. Simultaneously, the inability to access funding and utilize connections contributes to the inability to develop products that meet the necessary standards to be granted government contracts or equally interesting contracts from foreign direct investors. This leads to a vicious cycle that is particularly difficult to break, and leaves many graduates without actual entrepreneurial capability unless they have the appropriate social capital. Graduate entrepreneurs clearly lack the agency to change this situation on their own. Rather, it is up to powerful actors like government functionaries, managers of multinational corporations and established entrepreneurial families to change their behavior toward aspiring graduate entrepreneurs so that they can actually act upon opportunities in the Tanzanian environment. This means that unless such institutionalized actors actively start to facilitate graduates in establishing more relevant and powerful connections and gaining access to the necessary social capital, the Tanzanian market is likely to continue to be dominated by a small number of specific and powerful groups with a clear competitive advantage. Stimulating dominant actors to become more inclusive toward startups may therefore turn out to be a key conversion factor in enhancing graduates' entrepreneurial capability.
This notion of social capital as a key conversion factor that enables aspiring entrepreneurs to translate valued functionings into actual functionings offers a novel way of looking at how entrepreneurial capability can be enhanced.
Conclusion and recommendations: Contributing to the literature on entrepreneurship incidence and entrepreneurial capability, this study concludes that entrepreneurial capability is unequally distributed amongst young educated Tanzanians. This study shows that conversion toward real capability in Tanzania depends on personal insight, reflection and resoluteness - in other words, personal agency - which in turn depends on access to social capital. Lack of capability can be tackled when adequate conversion factors are detected: reforms that can take place in policy, education and financial services, and improved coverage by social media.
|
Consumer preferences for wine applying best-worst scaling: a Spanish case study
|
[
"Consumer behaviour",
"Market segmentation",
"Marketing strategy",
"Wines",
"Spain",
"Consumer psychology"
] |
Summarize the following paper into structured abstract.
Introduction: The commercialisation of wine in Spain is problematic due to two concrete circumstances:
1. the decrease in wine consumption[1] because of a consumer shift toward substitute drinks; and
2. the greater presence of national and foreign wine in the domestic market, which involves an increase in business competitiveness.
In turn, this increase in business competitiveness often causes additional difficulty for consumers, who have to process a lot of information on wine. Sometimes it can generate a state of confusion in the pre-purchase stage. Consumers can then be affected negatively in the process of making a decision and can be led to making less than optimal choices (Walsh, 1999). Including diverse information on wine bottle labels is obligatory, such as the degree of alcohol, region of origin, fiscal domicile of the enterprise, membership in a certain designation of origin, etc. This can make consumers feel bewildered during the purchase process, since wine has a global market of over 100,000 brands, several dozen grape varieties and many producing countries (Goodman et al., 2005; Johnson and Bruwer, 2004; Overby et al., 2004; Rasmussen and Lockshin, 1999). It can simultaneously cause a lack of trust regarding wine producers (Casini et al., 2009). In addition, the high level of fragmentation in Spanish wine production does not allow the majority of companies to aim at serving the whole market. Instead they must centre on specific market segments, where they try to differentiate themselves from their competition to satisfy their clients in the most efficient way possible. To develop specific strategies in different market segments, the wine attributes that have the greatest influence on consumer choice in each segment must be determined (through surveys, for example). The advantage of surveys is that they allow the acquisition of more knowledge about real consumer preferences. However, evaluation through a panel of consumers, for example, determines the wine that consumers bought but not necessarily the wine that they had desired to buy (Goodman et al., 2005). Nowadays, market researchers are greatly interested in the composition and formation of consumer preferences. Such preferences, which depend on information received by the consumer, are formed both by extrinsic attributes (price, origin, type, etc.) and intrinsic attributes (colour, flavour, etc.) (Becker, 2000). The former are part of the production process while the latter are part of the product itself. In turn, intrinsic attributes such as the flavour of a wine can only be determined by consumers during or following consumption, but not at the time of purchase. There are many ways to measure consumer preferences. Most common are surveys with rankings or ratings and consumer panel data, which give details on individual purchases. Both of these methods have problems. Ratings and rankings are not used in the same way by every respondent, and the results are subject to interpretation (Cohen, 2009; Cohen and Neira, 2003; Finn and Louviere, 1992). Consumer panels are a powerful technique, but have several weaknesses. First, they are expensive and only a very few wine companies can afford to obtain such data, so they will not help the majority of wineries or channel members. Secondly, they only allow analysis of what consumers have purchased; patterns can be discerned, but new attributes (or combinations) cannot be tested.
Thirdly, there is usually not enough information about the consumers to allow for segmentation, which is necessary, especially for smaller wineries targeting niche markets (Goodman et al., 2005). Therefore, we will not discuss analysis of consumer panel data in this article. Among studies that use rankings or ratings to measure consumer preferences, the conjoint analysis (CA) technique (Green and Rao, 1971) identifies, explores and quantifies consumer attitudes to predict what consumers really prefer. In conjoint analysis, surveyed individuals report their global preference for a product profile composed of a limited number of attributes, and the researcher estimates the relative importance of each attribute. Using CA has several advantages, such as estimating the psychological trade-offs that consumers make while evaluating several attributes together, measuring preferences at the individual level, a realistic choice or shopping task, the ability to use physical objects and the ability to develop needs-based segmentation. However, CA also has certain disadvantages, such as the complexity of designing conjoint studies; with too many options, respondents resort to simplification strategies. Other drawbacks are the difficulty of using it for product positioning research, respondents being unable to articulate attitudes toward new categories, and the failure to take into account the number of items per purchase. Therefore, it can give a poor reading of market share (Sattler and Hensel-Boner, 2000). In contrast, the best-worst (BW) method, contained within a subset of multinomial logit models (Marley and Louviere, 2005), has been demonstrated to be very precise in determining preferences (Auger et al., 2007; Finn and Louviere, 1992). Its main advantages are a high differentiation in the degree of importance that consumers grant to attributes and the prevention of bias problems in evaluations (Casini et al., 2009). It is especially indicated for comparisons between different socio-economic settings (Cohen, 2009; Cohen and Neira, 2003; Flynn et al., 2008; Goodman et al., 2005, 2008; Lee et al., 2008). It is easy to use and understand (Goodman et al., 2005), making it particularly suitable in the sphere of business management. The genesis of the BW method, which uses maximum difference scaling, comes from a little-investigated deficiency of conjoint analysis. Lynch (1985) warned that the conjoint analysis additive model does not permit separating importance from the value of the scale. That is to say, conjoint analysis allows intra-attribute comparisons of levels but does not permit cross-attribute comparisons. This is because each attribute has its own scale rather than a global scaling method. Exploiting the advantages of the BW method, the objective of this paper is to determine the wine attributes with the greatest influence in the process of consumer choice and, particularly, the differences among attributes depending on consumer gender, monthly family income and age. The final aim is none other than to identify the most important wine attributes that consumers use in the process of choosing, so that they can be used by wine-producing companies in their marketing strategies. For this paper, the most representative attributes used by consumers in choosing wine during the purchase process were selected from the literature review, interviews with experts, a previous questionnaire and similar papers published in other countries. The 11 attributes identified as the most influential are:
1. price;
2. tasting the wine previously;
3. region of origin;
4. grape variety;
5. ageing;
6. brand name;
7. alcohol level below 13 per cent;
8. design of the bottle and label;
9. matching food;
10. recommendations by friends and relatives; and
11. organic production.
A brief literature review is shown next to provide a context for this paper, before addressing the methodology, then the results and finally the most relevant conclusions together with the business strategies that can be derived from them. Similar research carried out in other markets is included, covering the main wine attributes that consumers use in their selection process and how consumers' socio-economic characteristics can influence those choices.
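To illustrate how best-worst (maximum difference) responses are turned into attribute scores, here is a minimal sketch. The choice data are invented, and a full balanced design would also track how often each attribute appeared so that scores can be standardized.

```python
# Illustrative best-worst scoring sketch (responses invented). In each
# choice set a respondent marks one attribute as "best" (most important)
# and one as "worst" (least important); the raw BW score for an attribute
# is best picks minus worst picks.
from collections import Counter

# (best, worst) pairs collected over choice sets and respondents
choices = [
    ("price", "organic production"),
    ("tasting the wine previously", "design of the bottle and label"),
    ("price", "alcohol level below 13 per cent"),
    ("region of origin", "organic production"),
]

best, worst = Counter(), Counter()
for b, w in choices:
    best[b] += 1
    worst[w] += 1

# In a real design, divide by each attribute's appearance count to
# standardize; here we print the raw score, ranked from best to worst.
attributes = set(best) | set(worst)
for attr in sorted(attributes, key=lambda a: best[a] - worst[a], reverse=True):
    print(f"{attr:35s} BW score: {best[attr] - worst[attr]:+d}")
```

Because every attribute is judged on the same best-minus-worst scale, the resulting scores can be compared across attributes, which is exactly the cross-attribute comparison that the conjoint additive model does not permit.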
Literature review: Many papers have analysed wine consumer behaviour and attempted to explain how various wine attributes can influence consumer purchasing behaviour and to what extent consumers' socio-economic characteristics affect this behaviour. The solution does not come easily. In fact, two characteristics prevent wine from being compared to other food products. On the one hand, there is a large variety of brands from which to choose (Goodman et al., 2005). On the other, besides making a search, a certain component of "credibility" is necessary in judging a wine, since the flavour and characteristics of a wine with the same brand and the same grape variety can vary according to the year of vintage (Lockshin et al., 2006). In addition, there is the difficulty of processing the large amount of information available to the consumer, which creates confusion in the pre-purchase process and negatively affects consumer decision-making. This brings about purchases that are not completely satisfactory. In an attempt to lessen those unsatisfactory purchases and simultaneously help wine businesses in their commercial strategies, several papers have identified the attributes in different markets that could have the most influence on consumer wine choice:
* an attractive label (Atkin et al., 2007; Barber et al., 2006; Rochi and Stefani, 2005; Seghieri et al., 2007; Thomas and Pickering, 2003);
* grape variety (Balestrini and Gamble, 2006; Felzensztein and Dinnie, 2005; Felzensztein et al., 2004; Goodman et al., 2006a; Jarvis et al., 2007; Ling and Lockshin, 2003; Lockshin and Hall, 2003; Steiner, 2002);
* brand name (Loureiro, 2003; Yue et al., 2006);
* region of origin (Keown and Casey, 1995; Orth et al., 2005; Perrouty et al., 2006; Schamel, 2006);
* recommendations by friends (Wansinsk et al., 2006);
* suggestions by the sommelier (Manske and Cordua, 2005);
* degree of alcohol (Lockshin and Rhodus, 1993);
* having read about wine at home (Unwin, 1999);
* information on the store shelf (Atkin et al., 2007);
* tasting the wine previously (Casini et al., 2009); and
* matching food with information on the back label of the bottle (Mueller et al., 2010).
Nevertheless, all these attributes have a different impact on consumers depending on their socio-demographic characteristics, such as family income level (Barber et al., 2006; Felzensztein et al., 2004; Goodman et al., 2006b), age (Bruwer et al., 2002; Gluckman, 1990; Goodman et al., 2006b; Seghieri et al., 2007) and gender (Barber et al., 2006; Barber, 2009; Goodman et al., 2006b; Mueller et al., 2010). Grape variety is one of the most influential attributes in the choice of a wine (Thomas and Pickering, 2003; Felzensztein et al., 2004; Balestrini and Gamble, 2006). However, this influence changes depending on the grape variety analysed (Ling and Lockshin, 2003) and has a much more important effect on wine consumers in emerging countries (Lockshin and Hall, 2003). Along this line, Goodman et al. (2005) state that Australian wine consumers are more influenced when buying wine by the variety of grape from which the wine has been made and by its region of origin, while Israeli consumers are more influenced by friends' recommendations and the brand. In turn, Jarvis et al. (2007) state that in Australia, the white Chardonnay and Riesling varieties and the red Shiraz and Cabernet Sauvignon varieties are those that reach the highest levels of loyalty among wine consumers.
Other varieties are chosen when consumers want to try something new.

Regarding the region-of-origin attribute, Schamel (2006) estimates that this is the main consumer decision-making attribute in those regions that sell principally red wine. Balestrini and Gamble (2006) in turn extend the concept of region of origin to country of origin, finding that the country of origin is the most influential attribute for Chinese consumers when purchasing wine. Along the same line, Yue et al. (2006) state that not only the region of origin but also the brand are the two main attributes for promoting wine.

Inexperienced wine consumers especially value the region of origin (independently of brand and price), as opposed to expert consumers, who consider the brand the perfect moderator of region-of-origin equity. Additionally, it has been verified that as consumer experience increases, consumers choose a wine according to a combination of attributes instead of esteeming a single one (Perrouty et al., 2006).

As to the degree of influence exercised by labels on the wine bottle, early researchers seem to confirm that wine consumers buy with their eyes (Rochi and Stefani, 2005), a behaviour more frequent in women than in men, since women are more influenced by colours, images, photographs and slogans (Thomas and Pickering, 2003; Atkin et al., 2007). However, the information on the back label can add confusion to the selection process, especially for women (Barber et al., 2006). This is a paradoxical situation, since it must not be forgotten that such labels exist to provide more information and so help the wine to be chosen (Charters et al., 2000).

In this way, given that consumers value labels differently, labels can be designed for the market segment they address - for example, a basic label for the regular buyer, another with amplified details for the interested consumer and a creative label for more sensitive consumers (Seghieri et al., 2007).

Another factor that influences buying decisions is the bottle sealing system, especially in the case of women. They feel that wax seals give the product more freshness, although foil coverings indicate higher quality (Barber et al., 2006).

Other papers determined that one of the influential attributes in consumer wine choice is having tasted the wine previously (Casini et al., 2009), while others established that consumers place more confidence in various types of recommendations of the wine (Goodman et al., 2005).

In this sense, Balestrini and Gamble (2006) and Wansinsk et al. (2006) determined that, to reduce the risk of making a poor decision on store-bought wine, consumers rely on recommendations, choose recognised quality brands and seek help from the retailer. Meanwhile, in a restaurant, the ways to reduce the risk of a poor wine choice are the waiter's recommendations, food and wine matching suggestions and the possibility of a small wine tasting.

Research by Manske and Cordua (2005) goes in this same direction.
They found that the role of the sommelier is highly important; the sommelier can explain intrinsic and extrinsic wine attributes in such a way that consumers can analyse them more easily, thus helping them to make a better decision.

Other attributes that can influence consumer choice are the alcoholic content of the wine (Lockshin and Rhodus, 1993) and having read about wine at home (Unwin, 1999), a more common activity in men than in women.

Finally, all the papers show that there exists a wide range of attributes that influence consumer wine choice, conditioned by consumers' socio-economic characteristics. Likewise, they make clear that there is no final conclusion and that it is necessary to continue working in various settings to contribute more information about the process of wine consumer choice.
Methodology: Data collection
Results and discussion: Differences among attributes depending on consumer gender
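The results section is only stubbed in this summary. As a purely hypothetical illustration of the kind of analysis it names - testing whether attribute importance differs by gender and comparing the discriminant capacity of alternative segmentations - the Python sketch below uses invented survey data. The attribute list, sample size, rating scale and choice of tests (Mann-Whitney U and a cross-validated linear discriminant classifier) are assumptions, not the paper's actual procedure.

```python
# Hypothetical sketch: gender differences in attribute importance and the
# discriminant capacity of alternative segmentations. All data are invented.
import numpy as np
import pandas as pd
from scipy.stats import mannwhitneyu
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
attributes = ["price", "tasting", "region", "variety", "ageing", "brand",
              "low_alcohol", "design", "food_match", "recommendation", "organic"]
n = 400  # hypothetical number of survey respondents
ratings = pd.DataFrame(rng.integers(1, 6, size=(n, len(attributes))),
                       columns=attributes)          # 1-5 importance ratings
gender = rng.choice(["male", "female"], size=n)
age_group = rng.choice(["young", "middle", "older"], size=n)

# Attribute-by-attribute Mann-Whitney U tests between genders
for a in attributes:
    u, p = mannwhitneyu(ratings.loc[gender == "male", a],
                        ratings.loc[gender == "female", a])
    print(f"{a:15s} U={u:8.1f} p={p:.3f}")

# "Discriminant capacity": cross-validated accuracy of an LDA classifier
# trained to recover each segmentation from the attribute ratings.
for name, labels in [("gender", gender), ("age", age_group)]:
    acc = cross_val_score(LinearDiscriminantAnalysis(), ratings, labels, cv=5).mean()
    print(f"segmentation by {name}: mean CV accuracy = {acc:.2f}")
```

A higher cross-validated accuracy for one segmentation than another is one simple way to operationalize the "greater discriminant capacity" that the conclusions below attribute to the gender segmentation.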
Conclusions and future research: In general, the attributes that most conditioned wine consumers when choosing a wine were tasting it previously, region of origin, price, and recommendations by friends and relatives. The attributes that conditioned them the least were bottle and label design, brand name and a low alcoholic content, in that order.

Specifically, taking the main attributes into consideration in the segmentation by gender, men were more conditioned when selecting wine by tasting it previously, region of origin, grape variety and ageing, while women were more influenced by organic production, food and wine matching and the design of the bottle and label.

According to the segmentation by income, the low-income segment was conditioned most by price and recommendations, while the medium- and high-income segments were conditioned by region of origin and grape variety. As income increased, organic production and the design of the bottle and label were valued more highly.

According to the segmentation by age, the younger the consumers, the greater the importance they gave to previous wine-tasting, price and organic production. As age increased, region of origin became more important.

Of the three segmentations, those by gender and age presented a similar discriminant capacity. However, the segmentation by gender had the greatest discriminant capacity and, to start with, would be the one most recommended for wine enterprises to use in their commercial strategies.

Managerial implications
|
Intellectual capital performance of financial institutions in Malaysia
|
[
"Intellectual capital",
"Financial institutions",
"Malaysia",
"Knowledge economy"
] |
Summarize the following paper into structured abstract.
Introduction: The recent changes in the global economy, consisting of a complex, dynamic and competitive environment, have led to a difference between the modern approach to value creation and the traditional way of monitoring operations. In meeting the challenges of the new global economy, knowledge-based resources emerge as the main factor in sustaining firms' competitive advantage. In pursuing the aims of the new economy, Malaysia has embarked on a mission to develop a knowledge-based society by introducing the Knowledge-based Economy Master Plan in 2002. This plan outlines various strategies to accelerate the transformation of Malaysia to a knowledge-based economy (Economic Planning Unit, 2001). Its purpose is to achieve sustainable economic growth whereby Malaysia will no longer depend on investments in capital or physical assets. With that, growth can be driven by productivity and innovation supported by effective management of both tangible and intangible resources, namely intellectual capital (IC). Generally, IC is made up of the combined knowledge of human, structural and relational resources (Abdul Latif and Fauziah, 2007).

In this new economic era, when knowledge-intensive companies tend to dominate the finance sector, it is crucial to maximize the utilization of resources, especially IC. Even though there are other important factors that contribute to firms' performance, the study believes that IC could be one of the predictors of firms' performance. In a knowledge-based economy, it is expected that the number of knowledge workers and opportunities will increase in Malaysia, and this phenomenon will force firms to enhance their IC. Thus, the purpose of this paper is to measure the IC performance of financial institutions in Malaysia. Financial institutions play a crucial role in the economy of Malaysia, as they allow the transfer of funds from surplus spending units to deficit spending units in the most efficient manner. As stated by Goh (2005), physical capital is crucial for financial institutions' operations, but it is eventually the IC that determines the quality of services provided to the customers.

Traditionally, a company is considered as having a competitive advantage if it is able to produce the same or a similar product at a lower cost. Thus, competitive advantage can be defined as having lower cost, which makes the company enjoy a higher profit margin. According to Hazlina and Zubaidah (2008), IC is considered a source of competitive advantage, which can increase the profit of a company. Bontis et al. (2000) found that a key reason attributed to corporate success is the leveraging of the company's intellectual capital. Goh (2005) concluded that domestic banks are generally less efficient compared to the foreign banks in Malaysia. The study by Abdul Latif and Fauziah (2007) indicated that the average Malaysian firm employs elements of IC in its business model. This study attempts to provide some insight into measuring the IC performance of financial institutions, which include banks and non-bank institutions. Specifically, the study aims to:
* identify the IC performance of financial institutions in Malaysia;
* compare the contributions of the three IC components in the VAIC™ model; and
* examine the relationship of IC with the financial institutions' performance.

This study enables firms to have a more definite and direct understanding of the composition of IC and to evaluate its developing tendency periodically.
This study also enables companies to understand the functions of the various forms of IC defined within the setting of their line of business, and to identify and strive for the main IC within and outside their firms. Finally, following the VAIC model, financial institutions can apply knowledge management and assess their employees' achievements by setting aims for enhancing IC for each department and each employee.

This paper is organized into five sections. The first section is a brief overview of the research, including the objectives and contribution of the study. It is followed by the literature review in the second section, which discusses the theoretical background of the research and previous studies on IC. The third section discusses the source of data, research methodology and framework. The fourth section concentrates on interpretation of the findings and discussion. Finally, the fifth section concludes and gives recommendations for future research.

Intellectual capital (IC)
Literature review: There are few studies on IC performance in Malaysia. Bontis et al. (2000) started the IC research in Malaysia. They extended the study of Bontis (1998) in Canada, using a psychometrically validated questionnaire to examine the inter-relationships of IC and business performance for service and non-service industries in Malaysia. They found that HC is important regardless of industry type. HC has a greater influence on how a business should be structured in the non-service industries compared to the service industries, while customer capital has a significant influence over structural capital irrespective of industry. In the main, the development of structural capital has a positive relationship with business performance regardless of industry.

Goh (2005) measured the IC performance of commercial banks in Malaysia for the period 2001 to 2003. The result showed that the value creation capability of commercial banks in Malaysia is largely attributed to human capital efficiency (HCE). It means that the investment in HC yields a relatively higher return than the investment in physical capital and SC. The author also demonstrated that foreign banks are more efficient than the domestic banks in Malaysia. However, in terms of value creation, domestic banks create more value added than the foreign banks.

Abdul Latif and Fauziah (2007) studied the practice of IC management in Malaysian services and manufacturing industries. The sampling frame was drawn from 449 organizations listed on the main board of Bursa Malaysia, and the data were obtained through a survey questionnaire. A multi-item, five-point Likert scale instrument was used to measure the three categories of IC. The result showed that, on average, Malaysian firms employ IC elements in their business. Their results also showed that companies with a positive gap, as opposed to those with a negative gap, tend to be higher in the degree of adoption of IC.

Hazlina and Zubaidah (2008) employed the correlation test and found that the IC value for companies listed on the Bursa Malaysia Main Board for the year 2005-2006 has a significant positive relationship with the firms' profitability. However, this did not hold for the firms listed on the Bursa Malaysia Second Board. On the contrary, those firms show a significant negative relationship between IC performance and productivity. Moreover, there is no significant relationship between IC value and firms' market valuation for companies listed on the Main and Second Boards.

Nik Maheran et al. (2009) examined the efficiency level and the trend of IC among 18 financial companies for the years 2002 to 2006. They found that firms' market value has been created more by CE (physical and financial) than by IC. However, there is no evidence of IC efficiency by years. In terms of the relationship between VAIC and its components, IC has a positive and significant relationship with HC and SC but not with CE.

Besides Malaysia, a few studies on IC performance have been conducted in other countries. Pulic and Bornemann (1997) presented important information about the IC efficiency of the 24 biggest Austrian banks for 1993-1995. They concluded that raising the efficiency of IC is the simplest, cheapest and most secure way to ensure sustainable success and is the most important resource of corporate success. Pulic (2002) employed the VAIC model to measure the IC performance of Croatian banks for the period 1996-2000.
He revealed significant differences in bank rankings based on efficiency and performance.

Firer and William (2003) concluded, from a sample of 75 publicly traded firms in South Africa, that business sectors are heavily reliant on IC. They indicated that associations between the efficiency of value added by a firm's major resource bases and profitability, productivity and market valuation are generally limited and mixed. They also suggested that physical capital remains the most significant underlying resource of corporate performance in South Africa.

Mavridis (2004) analyzed 141 Japanese banks from 2000 to 2001. The author focused on the actual status of HC and physical capital (CA) and its impact on the "intellectual" added value-based performance. A significant positive correlation is found between value added and CA. Both CA and HC contributed to the value of the best practice index (BPI) in different ways. The best performing banks are those that mainly have very good results in the usage of their IC or HC and less in the usage of their CA.

Kujansivu and Lonnqvist (2004) documented that there is a positive correlation between the value of IC and IC efficiency in Finnish companies. However, the value of IC is not correlated with total efficiency, which is measured by VAIC. Saenz's (2005) results showed a clear positive relationship between HC indicators and the market-to-book ratio, but no correlation between HC indicators and banks' efficiency and financial return in Spain. Additionally, the highest correlations are found between banks' efficiency and financial return and the market-to-book ratio.

In the research by Abdolmohammadi (2005), the effect of IC disclosure on the market capitalization of a firm is also investigated. The study was based on a content analysis of 284 corporate annual reports over the years 1993-1997 in the USA. The empirical evidence showed that there is a highly significant positive correlation between IC disclosure and market capitalization.

Najibullah (2005) suggested that banks' market value is positively associated with corporate intellectual ability and its three components, i.e. HCE, CE efficiency and structural capital efficiency, in Bangladesh. Shiu (2006) showed that VAIC has a significantly positive correlation with profitability and market valuation but a negative correlation with productivity in Taiwan. The findings suggested that the technological industry in Taiwan is capable of transforming intangible assets such as IC into high value added products or services, as claimed by Pulic (2004).

Cabrita and Jorge (2005) showed that IC is substantively and significantly related to organizational performance in the Portuguese banking industry. The study used the original questionnaire developed by Bontis (1997), and data were collected from a sample of 53 affiliated members of the Portuguese Bankers Association. The study also agreed that value is created when IC components interact, and the more they interact, the more value is generated.

Kamath (2007) estimated and analyzed the VAIC for measuring the value-based performance of the Indian banking sector from 2000 to 2004. The results showed that foreign banks are the top performers in HCE while public sector banks are the top performers in CE efficiency. There are vast differences in the performance of Indian banks in different segments.
The author concluded that public sector banks in India seem to have created the huge baggage of a large and inefficient workforce, which does not contribute anything to overall value creation.

According to the results of Yalama and Coskun (2007), IC is a more important factor than physical capital for banks listed on the Istanbul Stock Exchange. Saengchan (2008) investigated the relationship between IC capability and the financial performance of commercial banks in Thailand from 2000 to 2007. The study confirmed the role of IC in the performance of the banking industry in Thailand and concluded that IC should be recognized as one of the major investments for driving a company's sustainable growth.

In summary, all the literature documented that IC has a positive relationship with firm performance. Goh (2005) stopped at a descriptive analysis of the banking sector. Hazlina and Zubaidah (2008) only studied the correlation of IC components and profitability for public listed companies. Nik Maheran et al. (2009) only looked at the trend of IC and its impact on companies' value added. The authors extend Goh (2005), Hazlina and Zubaidah (2008) and Nik Maheran et al. (2009) by adding a regression analysis of IC components on the financial institutions' profitability. The main objective of this study is to empirically examine the association between IC and financial performance. Following Goh (2005), Shiu (2006) and others, this study also uses VAIC as an aggregate measure of corporate intellectual ability. However, the study extends the investigation to a different sector, namely the finance sector of Bursa Malaysia.
Data and methodology: Source of the data
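The data and methodology details are stubbed in this summary. For orientation, the sketch below shows the standard Pulic VAIC™ calculation as it is usually specified in this literature (e.g. Pulic, 2002, 2004). The function is a minimal sketch of that standard formulation; the example figures are invented for illustration and are not the study's actual inputs, which come from annual reports.

```python
# Minimal sketch of the Pulic VAIC calculation, assuming the standard
# definitions used in this literature. All figures below are invented.
def vaic(value_added: float, human_capital: float, capital_employed: float):
    """Return (HCE, SCE, CEE, VAIC) for one firm-year.

    value_added      VA = operating profit + employee costs + depreciation
                     and amortisation (output minus bought-in inputs)
    human_capital    HC = total salaries and wages
    capital_employed CE = book value of net assets
    """
    hce = value_added / human_capital      # human capital efficiency
    sc = value_added - human_capital       # structural capital
    sce = sc / value_added                 # structural capital efficiency
    cee = value_added / capital_employed   # capital employed efficiency
    return hce, sce, cee, hce + sce + cee

# Example with made-up figures in RM millions:
hce, sce, cee, v = vaic(value_added=900.0, human_capital=400.0,
                        capital_employed=5000.0)
print(f"HCE={hce:.4f} SCE={sce:.4f} CEE={cee:.4f} VAIC={v:.4f}")
```

On this formulation, a VAIC of, say, 1.78 reads as RM1.78 of value created for every Ringgit of resources employed, which is how the averages in the findings below are interpreted.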
Findings and analysis: Following Shiu (2006), the authors set the minimum value of VAIC at zero because, according to the efficiency model, it is not practical to have a negative value for VAIC. Table II presents the rankings according to VAIC for each company based on the two categories (bank group and non-bank group). Results in Table II indicate that Maybank has the highest level of efficiency (VAIC=2.9990) in the bank group, followed by OSK, Affin Bank and Ambank. This result is consistent with Goh (2005), where Maybank had the second highest level of efficiency after Hong Leong Bank. However, it contradicts Nik Maheran et al. (2009), where Affin Bank dominated for the years 2002 to 2004. Nik Maheran et al. (2009) also indicated Maybank as having the highest total value added in terms of total corporate value added. Furthermore, the authors found that the average VAIC for the bank group is 1.7814, suggesting that the Malaysian banking sector has created additional value of RM1.7814 for every Ringgit invested in IC.

Compared to the bank group, the non-bank group has a higher, or better, level of efficiency in IC. The average VAIC for the non-bank group is 2.3435. HLGC has the highest efficiency ranking with a VAIC of 5.8848, followed by Pacific & Orient and Pacificmas. Out of the 14 non-bank institutions, five are above average, with VAIC scores between 2.6614 and 5.8848. With regard to the grouping, we conclude that the non-bank institutions have a better efficiency level compared to the banks. A possible reason is that the bank group has high infrastructure costs, high social obligations, a huge workforce, and low efficiency due to too many branches. However, the result is inconsistent with Nik Maheran et al. (2009), where the commercial banks showed the highest IC efficiency, followed by insurance companies and security brokerage firms.

Figure 1 shows the VAIC of the 20 listed finance institutions throughout the years 1999-2007. The figure reveals that VAIC improved from 1999 to 2001. This might be due to the announcement of the consolidation program for the domestic financial institutions in July 1999 by the Government. The consolidation program represents a major structural enhancement to the banking industry in the country. However, efficiency and value show deterioration from 2002 to 2004 but improve after that. This phenomenon suggests that there were redundant resources in these institutions that had not been effectively consolidated or utilized throughout the years 2002 to 2004 (Goh, 2005). However, the condition improved after a couple of years of the consolidation program. The efficiency level increased 9.45 per cent in 2005, 9.36 per cent in 2006 and 29.17 per cent in 2007. This improvement indicates that the financial institutions have managed their IC properly, accurately and efficiently.

Results in Table III indicate that the total efficiency of financial institutions in Malaysia is below the 1999-2007 average (2.5493) most of the time, except for the years 2000, 2001 and 2007. This contradicts Nik Maheran et al. (2009), which demonstrated a higher VAIC for commercial banks from 2002 to 2006. The finding in Table III also shows that the value creation capability of the listed financial institutions is largely attributed to HCE, except for the year 2001, when it is attributed to SCE.
This might be due to the post-merger activities, where most of the financial institutions had to consolidate their organizational routines, procedures, systems, cultures and databases, which involves an increment of the company's SC. The high value of HCE shows that the investment in HC yields a relatively higher return than the investment in physical capital and SC. This is consistent with Pulic's (2002) statement that "low HCE is the main cause of low total value creation efficiency". Mavridis (2004) supported this by pointing out that the best performing banks owe their performance mainly to the effective and efficient usage of IC and less to the usage of CA.

Spearman's correlation test is applied to test the relationships between the variables. Our findings in Table IV indicate that HCE (r=0.311, p<0.01), CEE (r=0.385, p<0.01) and VAIC (r=0.312, p<0.01) are significantly and positively correlated with ROA. However, SCE is the only explanatory variable that is negatively and not significantly associated with ROA, with r=-0.085 (p>0.05). The finding is consistent with Saengchan (2008), where VAIC and CEE each have a positive and significant relationship with ROA, but it is contradicted for HCE and SCE in the case of Thailand. The result implies that HC, with the prompt assistance of CE, can ensure the financial institutions' future growth in Malaysia.

Results for the linear multiple regression analysis of ROA on HCE, CEE and SCE are reported in Table V. A multicollinearity test of the three independent variables (HCE, CEE and SCE) was performed. Using a cut-off VIF value of less than 5 (VIF for HCE=1.952, VIF for CEE=1.928 and VIF for SCE=1.022), no multicollinearity among the variables is found. The coefficient of determination R2 shows that 71.6 per cent of the variance of ROA is explained by the variance of HCE, CEE and SCE. The F-value is statistically significant at the 1 per cent level, implying that the regression model is reliable for prediction.

The estimated coefficient of correlation (R=0.846) shows a relatively high linear correlation between the independent and dependent variables. The regression result shows that HCE has a positive effect on ROA, as the estimated coefficient is 10.33 at the 99 per cent confidence level. In other words, when HCE increases by one Ringgit, ROA would increase by RM10.33. CEE also has a positive association with the firm's ROA, as the estimated coefficient is 417.731 and significant at the 1 per cent level, implying that when CEE increases by one Ringgit, ROA would increase by RM417.731. The effect of CEE is significantly larger in Malaysia than the 143.33 reported for Bangladesh by Najibullah (2005). The result is consistent with Saengchan (2008) in that CEE plays a major role in enhancing banks' returns in Thailand. In other words, total assets, both financial and physical, have been utilized to important effect in generating high value returns.

However, the regression also reveals that SCE has a negative effect on ROA, but it is not significant. The result is consistent with Shiu (2006), where the independent variables have the directional signs CEE (+), HCE (+) and SCE (-) in association with profitability. With that, it can be said that VAIC implies efficiency in creating corporate value or financial performance. In other words, the VAIC results demonstrate that an increase in value creation efficiency affects a firm's profitability.
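As a rough, runnable illustration of the statistical pipeline described above - Spearman correlations, an OLS regression of ROA on HCE, CEE and SCE, and a VIF check for multicollinearity - the following Python sketch uses synthetic data. The generated panel, the distributional choices and the planted effect sizes are assumptions standing in for the study's 20 institutions over 1999-2007, not its actual figures.

```python
# Hypothetical re-creation of the reported analysis steps on synthetic data.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from scipy.stats import spearmanr
from statsmodels.stats.outliers_influence import variance_inflation_factor

rng = np.random.default_rng(1)
n = 180  # hypothetical panel: 20 institutions x 9 years
df = pd.DataFrame({
    "HCE": rng.gamma(4.0, 0.5, n),
    "CEE": rng.gamma(2.0, 0.05, n),
    "SCE": rng.normal(0.5, 0.2, n),
})
# Planted relationship loosely mimicking the signs reported in the paper
df["ROA"] = 10 * df["HCE"] + 400 * df["CEE"] + rng.normal(0, 3, n)

# Spearman correlation of each efficiency component with ROA
for col in ["HCE", "CEE", "SCE"]:
    rho, p = spearmanr(df[col], df["ROA"])
    print(f"{col}: rho={rho:.3f}, p={p:.3f}")

# OLS: ROA ~ HCE + CEE + SCE
X = sm.add_constant(df[["HCE", "CEE", "SCE"]])
model = sm.OLS(df["ROA"], X).fit()
print(model.summary())

# VIF for each regressor (the paper applies a cut-off of 5)
for i, name in enumerate(X.columns):
    if name != "const":
        print(name, variance_inflation_factor(X.values, i))
```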
Conclusion: As the world moves into globalization, investors need non-financial disclosure, besides the traditional financial measures, to assist them in their investment decision-making. In other words, companies need to invest in IC to realize such gains. The finance function has a key role to play in managing knowledge assets and appreciating the sources of a firm's value.

The empirical findings from this research clearly reveal that there is a significant positive relationship between VAIC and ROA. The study shows that HCE and CEE have a significant positive effect on profitability, while SCE has a negative effect. With that, the authors conclude that VAIC indicates efficiency in creating corporate value, or the extent of corporate intellectual ability. In other words, the VAIC results show that an increase in value creation efficiency positively influences the profitability of a firm. Therefore, it is necessary for financial institutions to maximize the utilization of resources, specifically IC, in order to maximize profit.

In terms of the predicted hypotheses, the results for each component of VAIC and the correlations between the three resources and profitability are consistent with the results of Firer and William (2003) and Shiu (2006). This is an encouraging result, as it implies that management should be able to realize the full potential of an organization's IC to maximize stakeholders' benefit. This study provides strong empirical evidence that IC is an asset that can be utilized as a vehicle for a firm's improvement, particularly its profit.

As the first study investigating IC performance and profitability in a more econometric manner for financial institutions in Malaysia, this paper could be a good start or source of reference for future studies on Malaysia's public listed financial institutions. Unfortunately, the study does not cover all companies in the finance sector due to the unavailability of data. Thus, further studies may cover all companies in the sector or extend to other sectors or industries, which would provide a more comprehensive and complete report on the efficiency level. Besides that, the authors suggest that researchers consider examining value creation efficiency in relation to financial institutions' long-term objective of enhancing shareholders' wealth.
|
Total quality index of commercial oyster mushroom Pleurotus sapidus in modified atmosphere packaging
|
[
"Modified atmosphere packaging",
"Oyster mushroom",
"Phylogenetic analysis",
"Pleurotus sapidus",
"Total quality index"
] |
Summarize the following paper into structured abstract.
1. Introduction: In the past ten years, the consumption and production of commercial oyster mushrooms (OMs) have been increasing globally (Jafri et al., 2013). The Pleurotus species are known for having a unique flavour, a good texture profile and for being nutritious (Wan-Mohtar et al., 2018), which increases consumer demand for the product (Sapata et al., 2009). Mushrooms comprise 32.7 per cent crude protein, 2.4 per cent crude fat and 47.7 per cent carbohydrate, thus representing a healthy dietary option (Akbarirad et al., 2013).
2. Materials and methods: 2.1 Source of OM and fruiting body preparation
3. Results and discussion: 3.1 Molecular characteristics of commercial OM for growers
4. Conclusion: The present study investigated the TQI of harvested OM under different MAP gas mixtures and determined that HCP retained the best quality of OM. Compared with the control and LCP, HCP recorded the highest TPC and showed the highest effectiveness in maintaining the colour and odour quality of OM. Our phylogenetic analysis revealed that the commercially grown OM from Malaysia was P. sapidus strain QDR. The acceptance of the fresh OM by sensory panellists varied across the different MAP treatments, and the sensory reports suggested that HCP and LCP were more effective in maintaining the qualities of OM in terms of colour and odour. We believe that our findings provide significant evidence that low-cost HCP is the most suitable and efficient packaging technique for commercial OM.
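For readers unfamiliar with total quality index schemes, the sketch below illustrates the general shape such an index often takes in MAP shelf-life studies: a weighted combination of normalised quality-attribute scores tracked over storage. The attributes, weights and scores here are invented; the paper's actual TQI definition sits in its (omitted) methods section.

```python
# Illustrative sketch only: a generic weighted-attribute total quality index.
# Attribute names, weights and scores are assumptions, not the paper's values.
attributes = {  # attribute: (weight, score on a 1-5 scale, 5 = freshest)
    "colour":  (0.30, 4.5),
    "odour":   (0.25, 4.0),
    "texture": (0.25, 3.5),
    "overall": (0.20, 4.0),
}

def total_quality_index(attrs):
    """Weighted mean of attribute scores; weights are assumed to sum to 1."""
    return sum(w * s for w, s in attrs.values())

print(f"TQI = {total_quality_index(attributes):.2f}")  # higher = better quality
```

Under a scheme like this, comparing treatments reduces to comparing TQI trajectories over storage time, which is consistent with how the conclusion above ranks HCP against LCP and the control.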
|
Knowledge retention and aging workforce in the oil and gas industry: a multi perspective study
|
[
"Oil and gas",
"Knowledge retention",
"Old age retiring workers"
] |
Summarize the following paper into structured abstract.
1. Introduction: Knowledge retention has become an important and inevitable activity in organizations these days due to changing demographics and the graying of employees, and there is an inexorable threat of knowledge loss to organizations when employees leave (Stevens, 2010; Levy, 2011; Jennex, 2014). Knowledge-based organizations use knowledge to generate revenue, and, for this purpose, knowledge workers possess, create and apply knowledge (Nonaka and Takeuchi, 1995) to generate income. Losing these workers means organizations lose the much-needed knowledge that is the basis of their competitive advantage. The knowledge of these employees is of key importance, as its loss may lead to a decay of organizational memory when these employees leave, which in turn may reduce the firm's ability to identify and use past knowledge for competitive advantage (De Massis et al., 2016). The success of the organization depends on the combined capabilities of key individuals to achieve the organizational goals (Petruzzelli and Savino, 2014). Moreover, these employees possess organizational knowledge, knowledge of governance and knowledge of networks and relationships developed over time within the organization, and this knowledge is the key to enhancing and then sustaining a firm's performance (De Massis et al., 2016). Nerkar (2003) is also of the view that knowledge creation is an evolutionary process spread over time, and that combining current knowledge with past knowledge evolved over time enhances the impact of new knowledge. Experienced employees who have been working in organizations for a long time can combine past and current knowledge to effectively manage organizational goals. However, if they leave the organization, they will take with them the knowledge accumulated over time. Researchers have identified retiring workers as key contributors to knowledge loss (Calo, 2008; Stevens, 2010; Ball and Gotsill, 2011), suggesting the application of comprehensive knowledge retention strategies to avoid this loss (Leibowitz, 2009). They are of the view that not much work has been done regarding the retention of employees' knowledge (Levy, 2011; Durst et al., 2015), and organizations, even when knowing that they are going to lose valuable knowledge due to the retirement of employees, do not have formal procedures to handle this pending knowledge loss (Leibowitz, 2009). This study focuses on this aspect of knowledge retention from retiring workers, normally termed baby boomers. Studies (Ball and Gotsill, 2011; Joe et al., 2013; Levy, 2011) have shown that many organizations are going to lose a large number of employees due to retirement in the next 5-10 years, thus creating a tremendous knowledge loss.
2. Literature review: In the world at present, the workforce can be categorized into three groups, namely Generation X, Generation Y and baby boomers. According to Yu and Miller (2005), employees born between 1945 and 1964 are termed baby boomers, and they will approach retirement in the next 5-10 years. In this article, they will be referred to as old-age retiring workers to make the discussion clearer. As stated by Johnson (2011), in a survey of Fortune 1,000 companies performed by Ernst & Young, 62 per cent of the employees stated that there will be a labor shortage in the upcoming years, revealing that proper strategies need to be devised to handle this crisis. Low fertility rates are causing a major shift in the demographics of the workforce, and competition for a skilled labor force will be fierce in the upcoming years (Beechler and Woodward, 2009).
3. Research context and motivation: Certain industries, like manufacturing and oil and gas, are suffering more from this issue of an aging workforce and retirements, as the new generation is not stepping up to join these industries: in the case of oil and gas, many do not want to be away from home and work in harsh environments (Ball and Gotsill, 2011), and others do not want to work in dangerous environments and dirty factories even though the remuneration is good (Inkpen and Moffett, 2011). The oil and gas sector will be significantly impacted by the shortage of technical people in the next five to ten years, as the majority of employees will retire, thus triggering alarms for the success of future projects (Ball and Gotsill, 2011; McKenna et al., 2006). In 2011, Microsoft sponsored the third oil and gas industry collaboration survey, which showed that more than 40 per cent of people in the industry are older than 50, and 66 per cent are older than 40 (Figure 1). Moreover, due to fluctuating oil prices, there have also been issues of job stability in this sector (Ball and Gotsill, 2011). These factors motivated the researchers to conduct research in this area. This article makes a significant research contribution by considering factors not studied before, as shown below:
4. Methodology: To obtain a deeper understanding of the subject under study, qualitative methods such as interviews are considered more appropriate than quantitative methods such as questionnaires. Interviews are the most appropriate method when detailed insights are required from individual participants (Bruce and Berg, 2001). This method also provides the opportunity for interactive dialogue with respondents to tap into their front-line experience. For the current research, data were collected through semi-structured interviews by asking open-ended and probing questions to gain a deeper insight into the research topic (Gill et al., 2008). Due to the nature of the research questions, the scattered knowledge and the aim of covering a global perspective of the oil and gas sector, 20 interviews were conducted to gain practical insights into knowledge retention. The interviewees were selected based on their vast front-line experience and the available contact points of the research team. The participants represented "elite informants" working in key positions and involved in supervising KM activities. Elite interviewing is a common qualitative method with the benefit of yielding insightful information (Marshall and Rossman, 2011). The participants were contacted through emails, LinkedIn profiles and phone calls. Details on the companies involved and the experience, location and positions of the employees are provided in Tables II and III. The key informants selected were mostly senior persons who had worked in the oil and gas sector over a long period. Thirteen participants had more than 10 years of experience in their respective organizations, with only seven participants having less than 10 years' experience. Moreover, most of them were directly involved in KM initiatives within their organizations. All the participants held key managerial positions with teams working under them. Thus, these participants satisfied the criteria for relevance, experience and knowledge related to the central research question. The duration of each interview was around 50 min. With the consent of the participants, interviews were recorded and transcribed afterwards. Notes were also taken during the interviews and later matched with the transcribed data for analysis purposes. The interviews were conducted in English, apart from two conducted in the local language of the researcher; these were transcribed into English, and the transcribed data were then confirmed with the interviewees. The study adopts a grounded theory building approach (Charmaz, 2006) through the analysis of qualitative data. Grounded theory is a well-known qualitative technique; it is a systematic way of inquiry, which focuses on the development of codes into categories and then determines the interactions and relationships among different categories to produce a cohesive explanation of the whole phenomenon under study. This technique is suitable when scarce knowledge is available on a topic and the main aim is to produce fresh knowledge on that topic through the lens of the participants involved in the study; the term grounded theory arises because the outcomes are grounded in the data through a systematic procedure (Strauss and Corbin, 1998).
5. Results: 5.1 Current situation of aging workforce and impact of oil prices
6. Discussion and analysis: The results provide evidence that knowledge loss due to retirement is more acute in companies in developed countries than in companies in developing countries (Beechler and Woodward, 2009). Because of the financial crisis, organizations are focusing on short-term benefits, and knowledge loss has probably accelerated (Ball and Gotsill, 2011) in the past two years. In these situations, firing employees all of a sudden minimizes the chances of retaining their knowledge, and in such circumstances, employees are also not willing to share knowledge (Daghfous et al., 2013). It is also evident that employee knowledge is of the least importance for companies when it comes to reducing the costs and budgets of the organizations. Thus, it makes things work temporarily but has devastating effects in the long run (Calo, 2008). The oil and gas industry is considered the pioneer in KM; yet factors like costs and budgets make companies do their business in a traditional way, and knowledge management initiatives are put aside. In developing countries, there are enough young people to replace the old-age workers, and in most cases, there are young people working in all positions, thus eliminating the knowledge retention and aging workforce issue. However, government-owned organizations tend to be influenced by politics and budget constraints, due to which not enough recruitment is done, thereby creating an age gap in the workforce. This can cause problems when junior people take over responsibility after a senior employee retires: because of a lack of experience and expertise, they are not able to perform the tasks well enough, causing delays in operations. Based on this, it is proposed that:
7. Conclusion: Success through innovation and competitive advantage depends on the skilled human capital of an organization. This study is one of the few empirical studies conducted on the topic of knowledge retention from old-age employees, and probably one of the first in the oil and gas sector covering companies across different geographical locations and taking into account all three sectors, namely upstream, downstream and midstream. Oil and gas is a unique industry in terms of operations and geographical boundaries. This study provides managers and researchers with an in-depth insight into the various challenges related to knowledge retention from old-age retiring workers and how companies are handling them.
|
Be structured in managing talent: Don't leave sustainable competitive advantage to chance
|
[
"Leadership development",
"Competitive advantage"
] |
Summarize the following paper into structured abstract.
__NO_TITLE__: So much of organizational life is structured: even "flat" organizations have hierarchies; once created, the management of knowledge is codified; customer loyalty is dependent upon collated and archived data. Yet, when the future of most businesses is reliant on the acquisition, development and retention of talented people, this essential element of organizational success can so often be haphazard at best. The question that strategists, in human resources and more broadly, need to consider is: "why is the management of talent so often left to chance?"
Seriously addressing competencies: Organizations need to start by seriously addressing the identification and development of competencies.

Competency frameworks have been with us for some time now and, rather like old friends, are often overlooked. It is not just that society is hooked on the new; there are genuine reasons for a focus on competencies to appear somewhat dated. Old-style approaches may have seemed fine in the good old, static days before change became a way of life - should such a time ever have existed. However, is there any point creating such rigidity when the competitive horizon for the majority of businesses contains so much uncertainty, so much dynamism, so much inherent need to be flexible and adapt? A better question might be "can we afford not to?"

Since being identified by McClelland back in 1973, and encompassed within the "core competence" work of Hamel and Prahalad much later, competencies have been embraced, nowhere more so than in human resources. Definitions vary widely, one being "anything employees have that contributes to organizational success." Well, if employees have them, and the organization is progressing as a result, they are not something that can be ignored.

Employee competencies are what connect the strategic vision of the organization, and the context within which the organization is surviving and hopefully thriving, to organizational competitiveness and effectiveness. Desirable employee competencies need to be identified and then consistently implemented. Breaking things down from the conceptual to the operational is the key, with measurable, competency-based criteria at the heart of many successful organizations.

Different frameworks have emerged to identify and develop employee competencies. The job-based approach is the most common, in which competency criteria are developed following an analysis of job requirements. The future-based approach is more unusual, but popular in companies such as Hughes Aircraft and Siemens, as it concentrates on developing the competencies necessary to meet future strategic objectives.

What has come into question is the extent to which each of these approaches is viable in a world of competitive chaos. The job-based approach appears static. The future-based approach is only attractive if change can be planned. New models have been needed, and indeed have been found. The person-based approach, beloved of Microsoft, consists of identifying the individual attributes that will help the organization the most. The value-based approach, adopted by Baxter International, involves developing the competencies that support the core values of the organization.

How the competency question is addressed is changing, but what is not in doubt is that competency approaches work. Time and effort need to be expended on them, but the rewards for finding the right framework will follow.
Creating talent development "architecture": Competency frameworks provide the essential basic building blocks. Smart organizations think "architecture". Even on a global basis, talent is a scarce resource; the fight is on to get the best people, help them realize their potential, and keep them when the competitors come a-calling.

The need has never been greater for organizations to develop an architecture within which to develop talented people. The four basic steps are:
1. having a clear picture of your organization's talent needs over the next several years;
2. having the established learning and development pathways that turn raw potential into polished performers;
3. having the HR systems and processes that enable potential to be realized as performance; and
4. having programs that enable talented people to develop, and therefore enable talented managers to create the talent-rich organization.

In management, a new 4Ps has emerged - picture, pathways, programs and processes.

As organizations struggle to find new ways to compete, there really is little substitute for hard work, but it needs to be inspired hard work. A talent development system will take effort to create and ongoing effort to maintain. It needs to integrate with career management, training and development, succession planning, performance management, compensation and benefits, strategic HR planning, and recruitment and selection. Without attention to all of these elements it will not succeed.

There are a number of pitfalls to watch out for:
* without the personal involvement of the CEO, backed up by a strong HR department, the system is unlikely to be successful;
* talent needs to be developed for future needs, not past needs;
* beware of budget cuts in training and development;
* beware of hiring freezes when times are tight;
* be brave in identifying and notifying those who are not of high potential and removing them from the program; and
* ensure that you target talent at the right level in the organization.

Talented people like to work in organizations surrounded by other talented people and in an environment where ability is nurtured. The hard work will have paid off once your organization is a natural magnet for high-potential people.
Talent management at Johns Manville Corporation: One organization that needed to move fast in this area was Johns Manville Corporation (JM), the US building materials company. Quite simply, they would not have been able to achieve their corporate strategy without action. A high performance culture is unlikely to create itself. For JM, the approach was systematic.

The first step was to redesign the annual business planning process to ensure ownership of issues by employees and broad involvement in the planning process. New goals set would stretch the organization. The people involved would be stretched too, accelerating their development as leaders.

Calibration tools were then incorporated into the talent management process. Managers were given standardized tools to assess talent. These addressed the need to generate results, identify leadership behavior, spot potential for advancement, and assess how experiential learning had been harnessed by individuals, and compared all of this against standards.

The performance management process allowed for the following assessments:
* accomplishments (the individual's results);
* behavior relative to the company's "Criteria for Superior Leadership";
* strengths and development needs for current and future roles; and
* potential for advancement, billed "Next Moves".

The Annual Organizational and People Resource Review is the process in place to match strategies with talent and identify what new talent management initiatives are needed. The Talent Review Board is then in place with a focus on developing internal talent. Its three components are:
1. talent pipelines - ensuring talent is coming through;
2. talent pools - ensuring that people are ready to move up the pipeline; and
3. the talent review board meeting - which assigns development experiences to people.

Johns Manville's approach to managing talent is as holistic as it is systematic - and it works.
It's the structure, stupid!: Good things do happen by chance, but how much of a gambler with your organization's future are you prepared to be?

The truth is that structured hard work is behind many organizational success stories. In a post-modern world, it does not grab the headlines. But how many headline grabbers prevail in the long run?

Organizations that are willing to do what it takes to fully understand the competencies that drive their business, and have processes to develop the talent that delivers on them, are likely to have the edge in the end.

And success breeds success. If you don't believe this, try failure.
Comment: This multiple review article is based upon the following papers.

"Competencies: alternative frameworks for competitive advantage" by Robert L. Cardy and T. T. Selvarajan in Business Horizons provides a structured and thoughtful discussion of the competency debates. It is authoritative and convincing.

"Talent development: the architecture of a talent pipeline that works" by Jeffrey Gandz in Ivey Business Journal is a useful, pragmatic paper, even if it may be something of a plug for a talent management program. There are ideas here that can be absorbed and lifted during the coffee break.

"Tools and dialogue set the stage for talent management at Johns Manville" by Lily Ruppe in Journal of Organizational Excellence is a highly detailed paper, but will be of practical interest to those thinking of doing something similar.
|
Warranty implementation and evaluation: a global firm's case
|
[
"Warranties",
"Warranty policy",
"Warranty management",
"Warranty implementation",
"Global firm",
"High‐tech product",
"Consumer protection"
] |
Summarize the following paper into structured abstract.
Introduction: In order to be effective, a company's service program must set performance standards and benchmarks to assess its service (i.e. warranty) performance on a regular basis (Kleyner, 2010; Vigoroso, 2006; Chu and Chintagunta, 2011; Boyd and Walker, 1990; Tse, 1999). Warranties are a significant influence on a customer's product assessment (Huang and Fang, 2008; Rahman and Chattopadhyay, 2004; Kleyner, 2010; Biswas et al., 2009). Products with limited warranties have consistently received less positive customer evaluations compared to products offering a full warranty (Kerin et al., 2009). Thus, the development and management of a sound warranty policy provides a significant and competitive market advantage for business firms.

In the high-tech industry, firms that provide superior support and service to customers can dramatically improve their revenues and customer satisfaction. Yet survey research shows that only a few high-tech business organizations excel in providing superior support and service (i.e. warranty), leaving the rest vulnerable to competitive advances, client dissatisfaction and defections, and loss of revenues (Accenture, 2010).

In the areas of extended warranties and service contracts, researchers have reported that manufacturers have begun to differentiate themselves from one another (Day and Fox, 1985). That study also showed that the positioning of service contracts and the timing of the offerings are critical decisions. Slotegraaf and Inman (2004) have argued that the factors that affect buyer satisfaction with product quality (including warranty) may shift over time. In addition, Cooper and Ross (1988) showed that there are conditions under which warranty coverage diminishes over time and ends before the useful life of the product. Specifically, as the life span of the product progresses, the level of coverage shifts the burden of care toward the buyer.

Anderson (1973) examined warranty policies designed to provide self-protection for the seller. Findings demonstrated that the protective dimensions of the warranty are used by sellers to restrict the amount of risk they assume for future product performance. However, Menezes and Quelch (1990) found that many managers are troubled by the complexities associated with designing and implementing warranty programs. They state that a careful analysis of the situation and a clear identification of the role of warranties in the organization are keys to formulating a sound warranty strategy, which forms the foundation for effective warranty management. Menezes and Quelch (1990) concluded that the use of product warranties as a competitive marketing tool has increased substantially, and the expectation is that warranties will continue to be used even more as a strategic marketing advantage. Similar findings were reported in other research works (e.g. Chu and Chintagunta, 2011; Sprague, 2005; Rahman and Chattopadhyay, 2004; Vigoroso, 2006; Tan and Leong, 1999).

Although warranty terms may have a positive effect on product sales, tradeoffs exist when a warranty is offered. These tradeoffs may include the costs of product failures, design flaws, and sales promises. Product failure rates must be considered in warranty management, and support should be planned in case of product failure. The support plan should cover the repair philosophy, entitlement, and warranty reimbursement to third party maintainers (TPMs) and outsourcing (Vigoroso, 2006; Deierlein, 2003).

Product design flaws also impact the costs of a warranty.
When a product is released to the market with a design flaw, it is imperative to promptly send feedback to the engineering department. Since the service department is likely to be the first to discover the flaws, a feedback process must be in place to inform the engineers of a customer's complaint. This requires promoting strong communication between departments in a business firm.

The Magnuson-Moss Act of 1975, also known as the "Lemon Law," requires firms to provide written warranties to consumers before they purchase the product (Koku, 2007; Darden and Rao, 1979; Kelley, 1988). This law has strengthened consumer rights within product warranties (Kerin et al., 2009). Promises made by salespeople in presentations can be used as a binding, implied, or express warranty. Salespeople should be educated on the Magnuson-Moss Act and must realize that their statements regarding product capability and safety features can involve their company in costly litigation procedures. Salespeople and marketing executives should also know the warranty laws in the states in which they sell (Sack, 1986; Halstead, 1985).

According to Byrne (2004), the 25 largest manufacturers in the USA spend a total of about $15 billion per year on warranty claims. For companies across all industries, warranty claims processing is believed to consume 2.5 to 4.5 percent of revenues. Moreover, the situation is escalating further as new pressures (e.g. increases in warranty coverage) lead to a much greater rise in warranty costs (De et al., 2010). Additionally, one survey's findings reported that at most US manufacturing firms, warranty management draws only partial attention and support from a multitude of business units within the organization (Vigoroso, 2006). As such, warranty management is often described as a fragmented "chain" of events and processes (Vigoroso, 2006). Many companies are rethinking their approach to warranty implementation and management (e.g. Thomann, 2005; AberdeenGroup, 2010). However, the incentives actually go far beyond cost savings. Warranty improvements have been shown to boost revenues, enhance customer satisfaction and loyalty, and increase the quality of the product. Specifically, appropriate management of warranty logistics is needed not only to reduce the warranty servicing cost, but also to ensure customer satisfaction, because customer dissatisfaction has a negative impact on sales and revenue (AberdeenGroup, 2010; Vigoroso, 2006; Thomann, 2005; Murthy et al., 2004). Thus, there will be enormous rewards for marketers who manage warranties well. For these reasons, warranties are a significant influence on a buyer's product evaluation and provide a significant and competitive market advantage for business organizations in the marketplace.

However, the reviewed warranty management studies emphasized that business institutions need to develop and implement sound warranty management guidelines and policies to foster improved business performance (e.g. AberdeenGroup, 2010). The contribution of this case research study is its attempt to develop and empirically assess a general framework for the implementation and evaluation of a warranty policy.
Theoretical background: Warranty theory
Research objectives and propositions: Despite the rich literature in the warranty management field, the topic remains complex and ambiguous, thus opening the door for additional research (Chu and Chintagunta, 2011; Sprague, 2005; Rahman and Chattopadhyay, 2004; Karim and Suzuki, 2005; Tan and Leong, 1999). Murthy (2006) also emphasized that warranty management is a topical area that still needs further research focus. In addition to testing the theory, the additional research findings are useful to managers who are rethinking their decision to continue investing in building an efficient and effective warranty management program. Furthermore, they would help business institutions (e.g. high-tech companies) further enhance their performance (Accenture, 2010). Because warranty is a cross-functional issue, it is not unusual to see inter-departmental conflicts marked by mistrust. Mistrust and blame-shifting between different departments and functional groups lead to an inefficient way of developing and managing a warranty policy. Moreover, these tendencies stem mainly from the fact that business organizations do not have an integrated view of warranty management that would help them collaborate on warranty issues (De and Kumar, 2007). Researchers have long advocated adopting an integrative approach when studying the warranty management process, one that recognizes the importance of all appropriate departments' interests and the need to secure their support (De et al., 2010; Sprague, 2005; Vigoroso, 2006). To our knowledge, there is a lack of empirical research focused on the warranty process in an integrative/cross-functional fashion. A key contribution of the proposed framework of warranty management is the inclusion of diverse departments in the development and management of warranty policy. Specifically, this case research study's key contribution lies in its attempt to address warranty management processes across a multitude of a firm's departments. That is, a good understanding of the role of warranty in the design, manufacturing and logistics processes is a critical and significant step toward the commercial success of a product (Vigoroso, 2006; Kleyner, 2010). It is with this background that the proposed study tries to fill the void in the warranty management literature. The anonymous high-tech company was chosen as a sample because the company offered a wide range of products, warranties and service options. The company also utilized a vast reseller base to sell and service its products, which offered the potential to gain better insight into the role of resellers in a warranty program. The high-tech company also marketed its products and services to six specific industries: financial, retail, transportation, manufacturing, communications and the public sector. This broad industrial perspective gave the study added cross-industry insight into the implementation and evaluation of a good warranty policy, since the high-tech company considers these industries to be sustainable in the USA and abroad. Based on a review of the extant literature, there are few empirical research studies with a focus on warranties. A main reason for the scant empirical research on warranties is the lack of proper data (Chu and Chintagunta, 2011). As stated, the aim of this case study is to develop and empirically test a general framework for the evaluation and implementation of a warranty policy.
The proposed framework satisfies a current, critical need by providing guidelines on the steps needed to implement and evaluate a warranty policy within a high-tech global company. Recommendations are also offered regarding how the warranty framework can be improved. Specifically, this case research study discusses three specific areas of the proposed framework: implementation, support structure and evaluation. The following are the components and sub-components of these areas of the proposed framework and the related research propositions. Warranty program implementation stage
Methodology: Existing studies identified various practices pertaining to warranty policies. This review gave more insight into the various components and sub-components of the warranty framework and helped identify the research gap in the reviewed warranty literature. Subsequently, a survey questionnaire was developed to examine and assess the three components of warranty: implementation, support structure and evaluation. The findings will confirm, refute or supplement the practices discovered in the literature survey, and the study also attempts to assess the reliability of the proposed framework. Sampling method
Results: Implementation stage of warranty policy
Discussion and implications: The development, implementation and management of a successful warranty plan pose a major challenge for many firms. This case research study has presented and empirically assessed a framework, encompassing all key variables, for developing and employing a sound warranty policy as a solution to this challenge. The model is grounded in existing theory and research. The proposed framework satisfies a current, critical need by providing guidelines on the steps needed to implement and evaluate a warranty policy within the context of a high-tech global company. Thus, this case study attempts to fill the void in the warranty management literature. Additionally, this case research study's key theoretical contribution lies in its attempt to address warranty management processes across a multitude of a firm's departments. Furthermore, the model developed here is framed within the context of an integrative approach to warranty management. It also identifies the key factors influencing the success or failure of a warranty policy's implementation. This approach is beneficial because there are many different aspects to warranty, and a proper study of the subject requires a framework to integrate them in an effective manner. It also allows the warranty process to be studied in a comprehensive and integrated manner (for discussion on this topic, see, for example, Kleyner, 2010; Vigoroso, 2006; Murthy and Blischke, 1992). Such a policy leads to customer satisfaction, retention of current customers, attraction of new customers, enhancement of the company's bottom line (i.e. higher margins), and the commercial success of a physical good and/or service (Kleyner, 2010; Accenture, 2010; Vigoroso, 2006; Heskett et al., 1994). Therefore, the adoption of an integrative approach to warranty policy management also illustrates the practical implications of our framework. Moreover, including diverse departments offers a more expansive perspective on warranty policy and addresses a gap in the warranty management literature. In fact, the necessity of a cross-functional/departmental approach to studying warranty management is acknowledged in the existing literature (e.g. Kleyner, 2010; Vigoroso, 2006). Thus, this case research extends prior literature by implementing an integrative approach to studying the warranty management process, one that recognizes the importance of all appropriate departments' interests and the need to secure their support for successful warranty management and implementation. In addition, the anonymous high-tech company was chosen as a sample because it offered a wide range of products, warranties and service options and utilized a vast reseller base to sell and service its products, offering the potential to gain better insight into the role of resellers in a warranty program. The high-tech company also marketed products and services to six specific industries: financial, retail, transportation, manufacturing, communications and the public sector. This broad industrial perspective gave the study added cross-industry insight into the implementation and evaluation of a good warranty policy, since the high-tech company considers these industries to be sustainable in the USA and abroad.
This is another key contribution of this case study. As stated, the objective of this study is to develop and empirically test a general framework to identify the key factors influencing the success or failure of a warranty plan. The specific areas of the framework include implementation, support structure and evaluation. The following is a summary of the findings with reference to the components and sub-components of the three areas of the proposed framework, as well as the managerial implications. Implementation stage
Conclusion: A firm that sets performance standards and implements and evaluates its warranty service performance on a regular basis is a firm that will be effective in the marketplace. This case study empirically tests a general framework for the implementation and evaluation of a warranty policy (i.e. the implementation, support structure and evaluation stages). In summary, the findings for the implementation stage indicate that cost centers and profit centers should have their actual costs allocated on the basis of activity. For the support structure, there was a negative response to outsourcing as an option for implementing the warranty policy; this opposition came mainly from service personnel, whereas those in favor of outsourcing came mainly from the product marketing department. For the evaluation stage, the findings indicate that US firms should rethink their pricing, quality and warranty strategies for domestic and international markets. The proposed framework satisfies a current, critical need by providing guidelines on the steps needed to implement and evaluate a warranty policy. In order to focus attention on the significance of warranty policy to business institutions, the framework developed here is framed within the context of cross-functional involvement and the well-accepted notion of departmental integration of warranty management. This framework should encourage managers to adopt an integrated approach to the development of warranty policy, providing the opportunity to advance an organization's performance. Without a cross-functional, integrated approach, business firms are likely to end up with an inefficient warranty policy and a strategy-to-performance gap (see, for example, Sprague, 2005; Accenture, 2010). In today's unstable and uncertain economy, a strategy-to-performance gap could doom a firm's efforts to develop and implement an efficient and effective warranty policy.
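The implementation-stage finding that cost and profit centers should have actual costs allocated on the basis of activity can be fixed with a short sketch. The following Python example is purely illustrative and not drawn from the case company's systems; the cost pools, activity drivers and department names are all hypothetical assumptions.

```python
# Illustrative sketch only: allocating pooled warranty costs to cost/profit
# centers in proportion to activity drivers, in the spirit of activity-based
# costing. All figures, pools and department names are hypothetical.
warranty_cost_pools = {"claims_processing": 120_000, "repairs": 300_000, "logistics": 80_000}

# Hypothetical activity counts per department for each cost pool
# (e.g. claims handled, repair hours, shipments).
activity_by_dept = {
    "claims_processing": {"service": 4_000, "product_marketing": 1_000},
    "repairs": {"service": 9_000, "product_marketing": 1_000},
    "logistics": {"service": 600, "product_marketing": 200},
}

allocation: dict[str, float] = {}
for pool, cost in warranty_cost_pools.items():
    drivers = activity_by_dept[pool]
    total_units = sum(drivers.values())
    for dept, units in drivers.items():
        # Each department bears the pool's cost in proportion to its activity.
        allocation[dept] = allocation.get(dept, 0.0) + cost * units / total_units

for dept, cost in sorted(allocation.items()):
    print(f"{dept}: ${cost:,.0f}")
```

The design point is simply that costs follow measured activity rather than flat headcount or revenue shares, which is one common reading of "allocated on the basis of activity".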
Future research directions and limitations: Like any study relying on self-reporting techniques, this case research study faced limitations. The study results are not indicative of any causality regarding when a company should or should not go to a third party for warranty service. Future research can expand on this study's findings by determining the reasons for using a third party for warranty service. Such studies could investigate the need for a framework that organizations can use to determine when it is best to go to a third party to service their products under warranty, perhaps with a global focus. A trend of great significance, observed especially in the high-tech industries, is to outsource warranty service to a third party; even though the majority of the firm's personnel were against outsourcing, it is important to note that the management was not. Future research can also expand on this case research study by examining how companies balance the cost/quality/warranty ability of the product and how they match it to customers' price/quality/warranty preferences (for further information on this topic, see Douglas et al., 1993; Rahman and Chattopadhyay, 2004; Chu and Chintagunta, 2011). Another area that deserves evaluation is the techniques used to allocate warranty costs. This case study evaluated only one company in the high-tech industry; it seems appropriate to evaluate multiple companies and industries, perhaps with a longitudinal focus. Doing so would allow observation of the different techniques and how they relate to quality policies, as well as examination of companies that have adopted total quality management (TQM) and how they handle their warranty costs. In most markets, warranty is a significant component of a firm's quality image (Boyd and Walker, 1990; Murthy, 2006). A firm that sets performance standards and implements and evaluates its warranty service performance on a regular basis is a firm that will be effective in the marketplace. Although this is an exploratory study, the findings are relevant to both academics and practitioners at the national and global levels. This case study hopes to generate interest from researchers in the important and relevant topic of warranty policy; future findings will further enhance the understanding of warranty management processes.
|
Dialogue-based activation - a new "dispositif"?
|
[
"Social welfare",
"Citizen participation",
"Decentralised control",
"Empowerment",
"Human resource management",
"Social benefits"
] |
Summarize the following paper into structured abstract.
Introduction: In most European countries the so-called individual action plan (IAP) has become a major policy instrument in providing active welfare for social benefit claimants (OECD, 2001, 2007). An IAP is a written contractual-style signed "agreement" between the welfare recipient and the welfare agency, which explicitly accounts for the content and purpose of activation. As such, the IAP outlines the action to be taken by the client and the commitment of the welfare agency. A characteristic feature of the IAP is that it is based on a dialogue (or negotiations) between the client and the case worker/the system. During this dialogue the needs and wishes of the individual in question are to be articulated, which allegedly allows for individualised and tailor-made service provisions in helping individuals to restore their relationship to - and integration in - the labour market. The IAP is grounded in the notion that there are actually two parties that can and will mutually oblige one another through negotiations. In the existing literature it is widely agreed that the IAP is changing the relationship between individuals and institutions. The IAP represents a move towards decentralisation, contractualisation, individualisation, new and more democratic forms of governance, etc. (Born and Jensen, 2005; Borghi and van Berkel, 2007; Bredgaard and Larsen, 2008), which allow individuals to become "co-owners" of how their problems are defined and solved, i.e. citizens progressively become responsible for their own lives (e.g. Handler, 2003), which in turn helps to solve social problems more effectively, as user involvement and governance most probably make social services more effective (e.g. Osborne and Gaebler, 1993). Research, however, also shows that promises and expectations have not been fulfilled. From a "learning-to-labour" perspective the purpose of IAPs and activation has been questioned, as IAPs and activation have not been able to fulfil the goal of inserting people into employment (Lind and Moller, 2006). It has furthermore been argued that the IAP has collapsed from within because social policy discourses on responsibility and obligation seem to undermine tailor-made empowerment (Borghi and van Berkel, 2007, p. 422); or that IAPs have failed because of inadequate implementation conditions (Sirovatka et al., 2007); or because case managers have not yet internalised the new conditions (Sol and Westerveld, 2007); or because IAPs only benefit the most qualified beneficiaries (Bovin and Moachon, 2007). This type of critique is based on research questions and research designs evaluating policy intentions and implementation, and provides relevant information as long as the IAP is regarded as an administrative problem-solving technique. The question is, however, whether this approach is too narrow and predictable: when are policy outcomes ever in accordance with intention? By contrast, the aim of this article is to take the analysis of the IAP in new directions, which may allow us to pose new types of research questions beyond the logics of industrial society. Thus, the purpose is, theoretically and ideal-typically, to reflect upon the IAP as a new type of rationality and subject formation. It is argued that the IAP accelerates the creation of the self-reflexive decision-maker in late modern society, demanding that individuals become self-entrepreneurs. In this light, questions have to be posed from a broader perspective than policy intentions and implementation. The article is sub-divided into three parts.
In the first part of the article we undertake a diagnostic exercise as to how an IAP can be understood. The aim is to search for the basic dynamics and internal features of the IAP dialogue, and our main claim is that the IAP, from a Foucauldian perspective, may be described as a self-technology. In the second part we analyse the contextual framework of the IAP. It is argued that IAP - and human resource management (HRM) - dialogues have become increasingly prevalent in post-industrialised society. The IAP dialogue thus represents a general phenomenon in society extending far beyond the domain of social policy. It is a new dispositif, i.e. a rationality reaching far beyond the status of an optional technology. In the third and final part we reflect upon the challenges and research questions flowing from this generalisation of dialogue-based subjectivation.
Part one: deconstructing the IAP interview: When activation - or the action to be taken - is an outcome of negotiations, the service provisions provided by the welfare state are no longer "externally" guaranteed as universal rights and obligations. By contrast, the provision of benefits depends on the ability and willingness of the welfare recipient to engage in a dialogue, including the ability to construct and present oneself as one who is worthy of future-oriented investments. In effect, the epicentre of political decision making about the distribution of resources in society has shifted. The IAP interview has become a policy-forming as well as a policy-implementing arena. This contributes to the erosion of the traditional distinction in industrial society between employment and unemployment, or inclusion and exclusion. Instead, new lines of demarcation emerge between those who are willing and able to be their own entrepreneur in a constructive manner and those who are irrelevant, i.e. persons society does not deem necessary to sacrifice resources on and who can be excluded from the welfare state benefits system without further ado. Refusing to participate in the IAP interview in a constructive manner usually leads to a loss of unemployment or welfare benefits for a shorter or longer period of time. The IAP interview might thus be labelled a "fatal moment" (Giddens, 1991), which involves both opportunities and risks. Logics of the IAP interview
Part two: the context of the IAP logics: Reflexive and dialogue-based IAPs are not a narrow phenomenon practised in the activation industry alone. IAPs have spread horizontally in a viral manner to practically all areas of life. Within the welfare state apparatus the IAP is not confined to social and labour market policies; IAPs are also common features in areas such as, for instance, the school and healthcare systems. On the "ordinary" labour market IAP-like dialogues have gained footing as so-called employee-development interviews, while every second programme on television seems to include self-presentation and reality-negotiations of the self. It should therefore hardly be controversial to argue that the form and machinery of the IAP dialogue have been generalised. The IAP represents a general shift towards individualisation and system-induced reflexivity, where the self emerges in the continual observation and presentation of the self as a self, and dialogue and contracts are precisely the dominant media in which self-observation and self-presentation might be actualised. In other words, HRM has become a widespread management philosophy, which focuses on the development of employees, strategies for the individual, career planning, competency development and other relevant tools for the development of the self (Spencer and Spencer, 1991; Beardwell and Holden, 1997). These technologies have found their way into most public and private workplaces, where personnel are called into annual interviews with the management. In these performance interviews the individual's strong and weak sides are addressed in order to stimulate reflection with respect to relevant ambitions, skills, competencies and opportunities. In such performance assessments, the language created by industrialisation and collectivisation ("we would like", "we demand", etc.) has ceased to be useful. Instead, a new idiom is used - the self-strategising language. By using this idiom, employees can articulate that they, as individuals, have defined as their objective to get from A to B in X number of days; and it is in this very plan that they establish the contours of themselves [1]. The form and content of these HRM interviews for career plans are homologous to the IAP interview. They are future-oriented dialogues, where dialogue-related power is constituted in the modal form of planning, but where both parties find themselves in situations of mutual dependency. The employer/social worker controls the economic dimension: if the citizen/client/wage earner cannot or will not, the proverbial cheque book is closed. Conversely, the citizen/client/wage earner possesses an expertise which in its own right is also a form of power; they are the experts in their own lives and conceptions about the future, which the employer/social worker depends upon if there is to be any contract. Regarded as dialogue, the interaction is nevertheless marked by deep asymmetry. The asymmetry in these interviews is not due to the participants' institutional affiliations alone (civil servant in a bureaucracy vs citizen/client), but to the self-technology implied in the interview: the development of the interviewees themselves. One party acts as a "role-in-action" (a manager/social worker), while the other is the actual topic of the interview (that which is being talked about and is to be activated). On the one side sits a system administrator, who can step in and out of the role.
On the other side sits a participant (a self-strategist?), who becomes a subject in the moment she deals with herself as an object. This subject can only become the bearer of a role as a client/wage earner on the condition that her expertise about herself is accepted as the basis for the interview. It thus seems as if the same logics recur in both the ordinary labour market and the "activation market". Both performance interviews and IAPs are about self-reflection. The two areas draw upon the same language and the same objectives for the negotiations. Both types of arenas require deliberate and reflexive participation on the social level, and self-reflection and the accompanying self-promotion are necessary requirements. You only become a subject and individual via self-observation and dialogue, and we all can and must be constantly redefined. Sequential individualisation is the machine that gets everything running, while dialogue, planning and contract represent the technologies. In the background, an ideal of voluntarism, strategic participation and risk-running obviously lurks as a life form, as the topic of the interactions becomes where, how far and at what price (Giddens, 1991; Sennett, 1998; Beck, 2000; Bauman, 2001, 2003). On the way to a new dispositif
Reflections and research questions: In this article we have argued that the IAP is an asymmetrical self-technology which has been generalised. There is - so to speak - a structural homology between the IAP, on the one hand, and the HRM dialogues practised in all corners of society, on the other, which indicates that a new dispositif is at play. New procedural forms of dialogue-based steering and interaction between the client/case worker and employee/employer have been introduced. New types of interaction such as supervision and coaching, for instance, may be perceived as epochal answers to the new form of subjectivation and new conditions for steering, i.e. supervision and coaching are techniques expressing the new dispositif. If it is correct that a new dispositif is at play, and if it is furthermore correct that the core of this dispositif is entrepreneurial self-creation and self-expression, one might expect new challenges on the operational as well as the societal level, and we suggest four challenges and research questions that deserve further scrutiny. A basic challenge to the IAP is the asymmetrical character of the dialogue. Asymmetry, however, cannot be avoided, since it operates on the level of the self-technological dispositif. On one side is an "administrator" who is continually forced to co-reflect the possible strategic gap between the other's presentation of self and actual prospects. On the other side is an agent who is obliged to be aware of the distance between genuine wishes and the actual context of negotiation. The result is a fragile balance between the "administrator" and the client/employee, challenging the IAP as a dialogue. An increasing amount of reflexive literature on coaching, supervision and protreptic is trying to circumvent such challenges technically or normatively (e.g. Kirkeby et al., 2008). Objective success criteria constitute a problem, however, if the singularity of the IAP is accepted. Success criteria very much depend on the outcome of the dialogue, i.e. success criteria have become negotiable. Not only is a rich variety of possible outcomes socially acceptable today (as the perpetually ascending career is no longer the absolutely dominant social measure), but the function of the dialogue itself has also turned opaque. The conversation might function as a tool for strategic management, but it might also function as an end in itself, where the presentation of the self is the peak of social participation, a moment of absolute intensity and presence: to participate in the creation of the IAP-society. In this light it would be of interest to study the formation and development of success criteria in the singular interaction. The solution is not the introduction of arbitrary external criteria of success. This would undermine the dialogue itself, as the array of acceptable outcomes has in principle exploded for the self-creator. On the level of interaction the toolbox of the "administrator" is reduced to suggestions and negotiations, not as the only legal tool, but as the only one acceptable to the participants, as juridical and normative dictates are regarded as a professional defeat. Here the reader might recall the example with the young "rally driver". Rather, the dialogue is a self-technological arena where the following questions are defined and answered: What is work, what is the ambition, what is development, what is responsibility and ascribed to whom, and who am I?
Obviously we are not saying that classification and signification are totally volatile; au contraire, they will have a tendency towards stability anchored in local circulars and practices of standardisation. The point is, however, that these local practices challenge a core feature of the IAP interview, and it would be quite interesting to investigate the collision between the internal logics of the IAP and standardisation. From a sociological perspective, inclusion and exclusion change character. Exclusion and marginalisation no longer necessarily follow well-known systemic lines (e.g. one's position in the education system, profession, time working, age and gender). Even the distinction between winners and losers is shifting its foundation. Inclusion and exclusion are now linked to isolated interactions and actions and to visions about the future and the willingness to accept risk (e.g. how one chooses to use the education system). Consequently, exclusion and marginalisation become collectively invisible. Exclusion, marginalisation and social problems are regarded as a function of individual choices and/or planning errors. But such planning errors are not necessarily inevitable, as the unemployed constantly receive new opportunities to present themselves in the social, i.e. to plan anew in the next IAP interview. In effect one might ask: what is inclusion and exclusion? From a political perspective, one of the consequences of this development is that the collective becomes vulnerable. The vulnerability of the collective is due to the dialectical process in which the IAP, HRM and the new subjectivity form - and are themselves formed by - the individualisation processes in post-industrial (HRM/IAP) society. The growing tendency to individualise through reflexive, self-promoted processes in the form of microscopic spaces for negotiation renders it increasingly difficult to collectivise language and meaning. All statements become polysemic in the sense that their semantic possibilities are not fixed through a general institutionalisation. Instead, meaning crystallises in the very process of negotiation. It is these sociological and political features of the new sociality which increase the capacity of society to absorb exclusion while maintaining integration (Luhmann, 1997; Born and Jensen, 2002). Furthermore, this practice is the core of the new dispositif. It operates exclusively on the level of local and temporary processes, where it functions within a framework of risks and possibilities. Exclusion is a latent threat which conceals itself in the spaces between processes and plans. As political parties, unions, etc. cannot collectivise this phenomenon, one may ask: who can?
Note: 1. Employee development assessments have been introduced in all large companies and almost all public institutions in Denmark. See Holt Larsen et al. (1989) for a description of the language used in these assessments and their purpose. For a Foucauldian critique, see Townley (1999).
|
The governance of vulnerability: regulation, support and social divisions in action
|
[
"Vulnerability",
"Welfare",
"Agency",
"Youth",
"Social control"
] |
Summarize the following paper into structured abstract.
Introduction: Somewhat by stealth, the concept of vulnerability has crept into a raft of contemporary welfare and criminal justice policies and practices. The notion now occupies a relatively uncharted position in long-running debates about who requires or "deserves" support. Where an individual or group is deemed "vulnerable" this carries powerful normative undertones about constrained human agency, potential or actual injustice and a legitimation of resources being deployed. A vulnerability zeitgeist or "spirit of the time" has been traced in contemporary welfare and disciplinary arrangements (Brown, 2014, 2015), which now informs a range of interventions and approaches to social problems, both in the UK and internationally. As prominent examples, "vulnerable" people are legally entitled to "priority need" in English social housing allocations (Carr and Hunter, 2008), vulnerable victims of crime are seen as requiring special responses in the UK criminal justice system (see Hall, 2009; Roulstone et al., 2011), "vulnerable adults" have designated "protections" under British law (Dunn et al., 2008; Clough, 2015) and vulnerable migrants and refugees are increasingly prioritised within international immigration processes (Peroni and Timmer, 2013).
Elucidating the vulnerable citizen: There is a rich literature on how socially precarious or "marginalised" citizens are both regulated and assisted through welfare and disciplinary systems. It is well established that "supportive" provision and its delivery have long been linked with preoccupations and concerns with "problem" behaviour (Levitas, 1998; Young, 1999), from poorhouses and workhouses (Squires, 1990; Fletcher, 2015), to social work interventions (Wootton et al., 1959; Donzelot, 1979), homelessness initiatives (see Kinsella, 2011, this journal), welfare provision (Dwyer, 1998; Dwyer and Wright, 2014) and programmes for "troubled" families (Burney, 2005; Rodger, 2008; Welshman, 2013). Similarly, more explicitly controlling interventions delivered via the modern criminal justice system have elements of support and protection woven into them, perhaps especially in the case of youth justice (Muncie, 2006; Phoenix, 2008, 2009). Contemporary governance mechanisms which elevate notions of "empowerment" have also been connected with similar processes (Clark, 2005; Adamson, 2010, this journal; Wright, 2016), with care and social control now increasingly intricately enmeshed in systems of governance (see Garland, 2001; Rodger, 2008; Wacquant, 2009). Against this backdrop, the idea that certain citizens are "vulnerable" and require special support and protections has become commonplace in politics, policies, practices and discourses.
The vulnerability study: Findings in the forthcoming part of the paper are drawn from fieldwork undertaken during 2010-2011 for doctoral research which investigated the concept of vulnerability and its use in the care and control of young people. The qualitative empirical element of the research involved a case study in a large UK city (population around 750,000) which explored how vulnerability was operationalised in services for supposedly "vulnerable" young people. In all, 25 young people aged 12-18 were interviewed, with interviewees included on the basis that they were considered "vulnerable" by their workers and that they had extended histories of receiving relatively intensive welfare and/or disciplinary interventions. Around half were male and half female, and a range of different ethnic groups were included in the sample, with seventeen participants being of white UK ethnic origin. Almost all of the young people lived in inner city social housing estates, with the exception of three young people who lived in private rented accommodation. Transgressive young people were deliberately incorporated into the sample; just over half of the young people had offending histories, criminal behaviours, close association with anti-social behaviour (ASB) and/or had been excluded from school.
Findings: regulation, support and social divisions in action: This section provides analysis of practitioners' understandings of vulnerability and then moves on to consider insights from young people themselves. Synergies and tensions across the two groups are considered, with the empirical case study data highlighting the governance of vulnerability as a dynamic process, informed by policy developments and wider beliefs about social problems, and interpreted and modified by interactions between practitioners and young people.
Conclusion: governing vulnerability as a process: Like all governance philosophies, designing and delivering provision based on "vulnerability" has normative implications, which then play out on a day-to-day basis through the delivery of interventions (see Bevir, 2013, p. 4). Stories about social harm told through vulnerability narratives take multiple forms, and discernible patterns in these indicate that they are reconfiguring and reworking understandings of social injustice and disadvantage in new ways. Persuasions and prescriptions which operate in policy frameworks and narratives centred on vulnerability illuminate how power is not just regulatory but "productive" (cf. Foucault, 1980), with forms of subjectivity and agency in flux and changeable (Bevir, 2013). The governance of vulnerability is a developing strand in this wider process, arguably furthering traditional stereotypes about "problem groups" and reinforcing unequal subjectivities in superficially more palatable ways.
|
Ecologies of interests in social information systems for social benefit
|
[
"Network analysis",
"Knowledge integration",
"Collaboration",
"Case study",
"Community",
"Social computing"
] |
Summarize the following paper into structured abstract.
1. Introduction: The recent explosion of social media applications has led to wide-ranging discussions of how these technologies can be integrated into more traditional information systems. Concurrently, there has been a rise in the use of the term "social information systems" to describe systems that facilitate collaboration through the use of social technologies (Schlagwein et al., 2011). This perspective of social information systems focuses on the capabilities of various technologies in enhancing social activity and collaboration among a community of actors - typically on a digital platform. An alternative perspective of social information systems, however, can be adopted. This focuses attention on the pursuit of social endeavours and illuminates a class of social information systems in which various technologies are applied to address concerns in wider society.
2. Social information systems: All information systems are, by definition, social systems in that they involve people communicating and taking action based on meaningful information. Therefore, when referring to social information systems it is important to be clear about the meaning ascribed to the word social. "Social" can take several forms as an adjective, but two distinct definitions are particularly relevant to defining social information systems.
3. Social information systems, complex problems and policy networks: The objective of this paper is to explicate the features of, and issues surrounding, social information systems which are designed to address social concerns. Social problems such as unemployment, social housing, access to healthcare, the digital divide, etc., are usually wholly or partially in the domain of public policy and the delivery of social programmes.
4. Methodology and analysis: In order to investigate social information systems for addressing societal concerns, this paper draws on data collected in the course of a study into the role of ICTs in policy networks in an Australian context. The overall study has been ongoing since 2006 and, for the purposes of this paper, the data examined focus on an organisation which was established to facilitate internet-based information resources and promote collaboration and information exchange in different aspects of indigenous health. Given that the objective of this paper is to explore and characterise this particular class of social information system, the case is considered revelatory (Yin, 2009), in that there are relatively few well-established social information systems available for observation and analysis. Following a largely interpretivist approach (Walsham, 1995, 2006), the data used to develop the following narrative are drawn from a variety of primary and secondary sources and have been collected over several years as part of an ongoing study. The description and discussion of the case study is presented in three sections.
5. Case study: the Australian Indigenous HealthInfoNet: The Australian Indigenous HealthInfoNet is an internet-based resource which represents the front-end of a research centre located in a university in Western Australia. The mission of HealthInfoNet is "to contribute to improving the health of Australia's Indigenous people and assist in 'closing the gap' (between Indigenous and non-Indigenous Australians) by facilitating the sharing and exchange of relevant, high quality knowledge" (HealthInfoNet, 2013). The Australian Indigenous HealthInfoNet acts as a "one-stop" portal to research and information on a range of Australian Indigenous health issues. The roots of HealthInfoNet can be traced back to 1981, when the founding director (a medical doctor with a deep interest in indigenous health issues) was appointed to a research fellowship with a statutory body which promoted understanding and knowledge of Australian Indigenous cultures. In this role, Dr Thompson recognised that the knowledge base in relation to indigenous health was fragmented, inaccessible and inappropriate for the individual communities who needed the knowledge to take action at a local level (Thompson, 2005). In 1997 Dr Thompson developed HealthInfoNet based around the key tasks of translation research (involving primary data collection and analysis, and the synthesis of a wide variety of data and other information obtained from academic, professional, government and other sources) and the dissemination and exchange of information. With the development of internet capabilities in the 1990s, the original information "clearinghouse" functions of the centre evolved towards the current platform. In addition to providing access to research reports and data on a wide range of health issues, from heart disease and diabetes through to road safety, HealthInfoNet supports online discussions through various "yarning place" forums and chat rooms, and integrates Twitter and Facebook accounts.
6. Discussion: HealthInfoNet can be considered a social information system both in terms of its use of social and collaborative technologies and in terms of its underlying principles and objectives of ameliorating the complex social problems associated with the health of Indigenous Australians. IRS is only one of over 60 indigenous health topics supported by HealthInfoNet. While some other topics directly related to indigenous health are widely used in their respective networks, HealthInfoNet did not become fully integrated into the underlying sub-networks of IRS.
7. Conclusion: This paper has investigated a class of social information systems which pursue objectives associated with societal benefit. It is proposed that such systems often deal with complex social problems and need to be integrated within the wider public policy networks associated with that problem. Through the description of the Australian Indigenous HealthInfoNet case and analysis of the IRS policy network, it was illustrated that policy networks are comprised of many actors engaged in different forms of activity. Interactions among actors in the policy domain are seen to cluster around sub-networks of activity which have different priorities and interests and pursue different objectives. The context into which the social information system is integrated is therefore not homogeneous and the policy network can be viewed as an ecology of interests.
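The conclusion's image of a policy network as an ecology of interests clustering into sub-networks can be made concrete with a small sketch. The following Python example is illustrative only and does not use the paper's data: the actors and ties are invented, and greedy modularity maximisation stands in for whatever network-analysis method the study actually applied.

```python
# Illustrative sketch only: surfacing sub-networks of activity in a toy
# policy-network graph. Actor names and ties are invented, not the IRS data.
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

G = nx.Graph()
# Hypothetical ties: a research-oriented cluster, a service-delivery cluster,
# and a portal actor bridging the two.
G.add_edges_from([
    ("university", "research_centre"), ("research_centre", "gov_agency"),
    ("university", "gov_agency"),
    ("community_org", "health_service"), ("health_service", "local_council"),
    ("community_org", "local_council"),
    ("portal", "research_centre"), ("portal", "health_service"),
])

# Modularity-based community detection groups actors that interact more with
# each other than with the rest of the network.
for i, community in enumerate(greedy_modularity_communities(G)):
    print(f"sub-network {i}: {sorted(community)}")
```

On such a toy graph the two clusters emerge as separate communities, echoing the observation that a portal can sit between sub-networks without being fully integrated into any one of them.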
|
Probing the progress of the external dimension of the Bologna process
|
[
"Cooperation",
"Harmonization",
"Reform",
"Internationalization",
"Convergence",
"Bologna"
] |
Summarize the following paper into structured abstract.
Introduction: The globalization of knowledge brought about a fundamental reconsideration of Europe's traditional systems of higher education through the signing of the 1999 European Bologna Process higher education reform. The attractiveness of this reform, 20 years after its launch, is seen in the significant growth in its membership - from 29 countries in 1999 to about 48 countries today, including Russia (EHEA, 2018). Trade in higher education services has become so strong that even countries not involved in the reform need to be aware of what is happening, because of the inevitability of collaboration among educational systems and institutions in the global education marketplace. According to Altbach (2001), higher education is increasingly seen as a commercial product to be bought and sold like any other commodity. Countries involved in such trade prefer commodities that are easily comparable; hence the current global drive toward the Bologna model of higher education reform and harmonization in Europe, Africa, Central Asia, Asia-Pacific and Latin America.
Conclusion: The perspective that the ulterior motive of the Bologna Process external dimension was to promote European hegemony remains debatable. Notwithstanding, the reform has strengthened socioeconomic partnerships between European countries and between Europe, Africa, Latin America and the Caribbean, Asia and Asia-Pacific. These partnerships are evidenced by the adoption of Bologna models of education reform in these regions as well as by the European Union's funding of exchange programs between European universities and universities outside Europe. The spread of the reform has not only been along ex-colonial lines, as suggested by some research (MacGregor, 2008). Its influence has been worldwide: in regions where Europe's socio-political influence is not strong, the Bologna Process has served as a model for regional integration (Clark, 2014). The internationalization of higher education has gained new momentum since 1999, thanks to the undeniable influence of the Bologna Process. The wide-ranging impact of the Bologna Process reform discussed in this paper seems to indicate that there is no end in sight yet to its influence. Scott (2009) argues that the global relevance of the Bologna Process will continue to grow with time. Clark (2014) analyzes the influence of the Bologna Process while underscoring its role as a model of regional integration in Africa, Asia and Latin America.
|
A pilot whole-school intervention to improve school ethos and reduce substance use
|
[
"Schools",
"Substance misuse",
"Social inclusion",
"Australia",
"United States of America"
] |
Summarize the following paper into structured abstract.
Introduction: Rates of tobacco, alcohol and illicit drug use (henceforth termed substance use, SU) among UK young people are among the highest in Europe (Hibbel et al., 2004; NatCen and NfER, 2007). While many young people experiment with substances, frequent/early SU strongly predicts harmful use in adulthood (Fergusson et al., 2003; Riggs et al., 2007; Viner and Taylor, 2007). School-based preventive education is now common (Ofsted, 2005) but reviews suggest effects are small and not sustained (Faggiano et al., 2005; Foxcroft et al., 2005; Thomas and Perera, 2006). Other approaches are possible. A sense of inclusion in and positive attitude to school, and engagement with education, are protective factors against early use (Fletcher et al., 2008). The health-promoting schools movement has called for schools to become more inclusive, supportive environments, marking a shift from schools as sites for health education to viewing schools as settings that can influence health (Young and Williams, 1989; McCall et al., 2005; Bonell et al., 2007). Early studies of interventions to modify school settings in order to reduce SU were inconclusive because of variation in the completeness of implementation and weak evaluation designs (Gottfredson, 1986; Gottfredson et al., 1996; Battistich et al., 1996). More recent studies report significant effects on SU. The US "Aban Aya" project aimed to increase social inclusion by "rebuilding the village" in schools largely serving African-American communities, informed by the theory that increasing social ties and cultural pride in schools could reduce rates of SU and other problem behaviours. Its whole-school component involved a standardized process, convening an action group in each school comprising students, staff and others to review policies and undertake actions to promote an inclusive school climate. The trial of this intervention comprised three arms: a whole-school plus social-skills curriculum arm; a curriculum-only arm; and no intervention. A primary analysis that compared the intervention arms with no intervention reported a substantial (34 per cent) reduction in boys' SU (p<0.05). A secondary analysis reported that the arm combining the whole-school and curriculum elements was more effective than the curriculum-only arm in reducing a composite behavioural risk measure, suggesting the whole-school component was an "active ingredient" (Flay et al., 2004). Informed by attachment theory, the Australian "Gatehouse Project" aimed to impact on SU and other health outcomes by promoting students' security, positive self-regard and communication with staff and other students. It was delivered in a range of school types and neighbourhoods in the state of Victoria. Again, this intervention convened an action-team in each school to review policies and plan actions, facilitated by an external "critical friend" and informed by data from a student survey. It also delivered a social/emotional-skills curriculum. The Gatehouse Project reported consistent reductions in various measures of SU (Bond et al., 2004a, b; Patton et al., 2006). These studies demonstrate the potential of interventions promoting a positive school ethos for preventing SU. Process evaluation of the Gatehouse Project found the various components (needs-survey, action-team, critical friend) functioned synergistically, and although specific actions varied between schools, these were well completed.
Implementation was facilitated by supportive management and broad participation (Bond et al., 2001; Glover and Butler, 2004). However, this evaluation did not attempt to assess systematically how completeness of implementation might have been influenced by schools' baseline social climate or "ethos", i.e. the contextual characteristics specific to the school that distinguish it from other schools (Rutter et al., 1979; Gittelsohn et al., 2003). While many young people experiment with SU with little or no associated harm and for reasons that have nothing to do with their schooling, research in English schools (Fletcher et al., 2009) suggests three pathways whereby schools may inadvertently engender early and frequent SU among some students: schools with lower levels of student and parent engagement have more disengaged students who use substances as alternative status markers; unsafe schools lead to students using substances to facilitate protective friendships with substance-using peers; and unsupportive schools lead to increased numbers of anxious and unsupported students who use substances to self-medicate. By enhancing, respectively, social ties and pride, and security, communication and self-regard, the Aban Aya and Gatehouse projects might have worked by moderating these pathways. However, it cannot be assumed that these interventions are feasible or acceptable, let alone effective, in England, given cultural and policy differences. Feasibility is a particular concern because English schools experience pressures relating to government-set exam attainment targets, local league tables of attainment and regular external inspections, which may constrain schools' abilities to engage in complex health-promotion interventions. Fostering a positive school environment is required within the National Healthy Schools Programme (DH and DfES, 2005) but there is no evidence-based guidance on how to achieve this (Warwick et al., 2004). We piloted "Healthy School Ethos" (HSE), a whole-school intervention strongly informed by the theories of change and processes used in the Gatehouse and Aban Aya projects as well as the research on pathways to SU in English schools. As recommended for the evaluation of complex interventions, we took a phased approach to evaluation (Craig et al., 2008). In this initial phase of evaluation, we aimed to examine:
* whether the intervention was feasible and acceptable in English schools;
* the influence of schools' baseline ethos on implementation; and
* awareness of the intervention throughout the school.
Because of our formative focus and small sample, we did not aim to examine students' outcomes but intend to examine these in a subsequent phase. We piloted our intervention in two schools in the 2007/2008 academic year. Using a structured process modelled closely on the Gatehouse Project, we aimed to enable each school to carry out locally determined actions to increase students' security, positive self-regard and communication with staff and students. We provided various standard inputs (an external facilitator and guidance manual, a student-needs survey, and training) and asked schools to undertake a standardized process involving convening action-teams to meet ten times through the year to determine priorities for action and ensure delivery. At the funder's request, we required schools to engage in some pre-set actions (develop agreed rules for appropriate conduct; review policies on bullying and feedback to students; provide one-to-one pastoral care; facilitate events and displays).
While staff training aimed to improve classroom management and inclusivity, our intervention, unlike the Gatehouse and Aban Aya interventions, did not involve a specific social/emotional-skills curriculum because of our specific interest in evaluating a whole-school approach rather than individual curriculum components. We provided £4,000 core funding per school plus £5,000 responsive funding, with bids being judged by the research team.
Methods: We employed a case-study design nested within a "mini" matched-pair randomised trial. Our facilitator contacted 174 schools, of which 20 agreed to participate in principle. From these we identified two pairs of schools, each matched on: being rated as "satisfactory" or "good" by the national school inspectorate; and high or low proportions of black/minority ethnic students and students receiving free meals. From each pair, we randomly allocated one to the intervention arm and one to the comparison. Soon after random allocation, our "satisfactory" intervention school opted not to participate after doing unexpectedly poorly in the previous summer's examinations. Since this was a pilot process study rather than a phase-III trial, we re-allocated the dropout school to the comparison arm and the original comparison school to receive the intervention. We thus retained two "satisfactory" and two "good" schools, one of each in the intervention and comparison "arms", of varying social and ethnic composition, allowing comparison of schools with different baseline ethos and student profiles. Our "satisfactory" school, "Woodbridge" (a pseudonym), is a "community school" where the local authority oversees the school's management. The student intake is 210 per year, 21 per cent of whom receive free school meals and 45 per cent of whom are of black or other "minority" ethnicity. Most teaching is in mixed-ability groups. Our "good" school, "Hillside" (another pseudonym), is a "foundation" school managed by a board of governors. The school's intake is 190 per year, with 7 per cent receiving free meals and 3.5 per cent being of black/"minority" ethnicity. Students are streamed by ability in most academic subjects from year 7. We undertook pre- and post-intervention surveys of year-7 students, aged 11/12, to examine awareness of our intervention actions. Questionnaires were piloted with young people aged between 11 and 12 from other schools. Surveys were conducted in private in classrooms with support from two fieldworkers. Across all four schools, 798 students were registered, of whom 605 (75.8 per cent) consented to complete baselines and 721 (90.4 per cent) follow-ups. This increase in numbers reflected fieldwork modifications (better explanations; teachers present in classrooms but not able to read responses). We also undertook in-depth qualitative research to examine feasibility, acceptability, awareness and contextual influences. We undertook semi-structured interviews with each head-teacher, the external facilitator and our two trainers, as well as with a sub-set of action-team members in each school sampled purposively by role. In Woodbridge, we interviewed three senior staff, one junior staff member and one student, and in Hillside, one senior staff member, two junior staff and two students. We interviewed two staff per school participating in our training, including one experienced and one inexperienced staff member. Additionally, we interviewed three Woodbridge students and five Hillside students participating in other intervention actions, and 17 students in each school not participating in specific actions, purposively recruiting young men and women. Those in Woodbridge were drawn from a range of ethnic groups while those in Hillside were white, reflecting the composition of each school. All interviews were conducted by researchers on-site, in school hours, in private rooms.
Interviews used guides with themes and probes and discussed the participants' experiences of the school, HSE's aims and objectives, and their experiences and views of the process of implementation. Informed by the educational literature on school ethos (Rutter et al., 1979; McMurtry, 2005), we took this to refer to schools' distinctive culture, priorities and beliefs. In exploring this empirically, we used open-ended questions to explore how participants perceived their schools and how this may have affected delivery of the intervention. Interviews with intervention providers and staff lasted 45-90 minutes, while interviews with students averaged 30 minutes. All were audio-recorded and transcribed in full. We also undertook unstructured observations of various meetings to examine processes of participation and enable triangulation with interview accounts. At Woodbridge we observed the initial meeting between the intervention facilitator and the school's senior leadership team, four action-team meetings and a training session. At Hillside, we observed the initial meeting plus six action-team meetings. Observational data comprised notes written during observations, sometimes augmented later that day from memory. We analysed survey data using Stata, summarising the proportion of students reporting awareness of various policies and actions in intervention and comparison schools. We produced crude and adjusted odds ratios (OR) to assess these differences, overall and among sub-groups (gender, baseline attitude to school) where overall ORs were significant (p<0.05). All analyses adjusted for clustering, except where small samples in some sub-group analyses did not allow this, and multivariate analyses adjusted for ethnicity and socioeconomic status, plus gender and baseline attitude to school when not stratifying by these. We undertook a thematic content analysis of qualitative data using NVivo. One researcher coded interview transcripts and observation notes primarily using a deductive framework based on our research questions, while another analysed transcripts primarily using codes inductively grounded in the data. Both drafted memos to explain and link codes. The two researchers then compared their analyses, refined codes and generated additional memos in discussion, each then coding the data a second time, now both inductively and deductively. In the course of analysis, data, codes and memos were compared to identify cases of disagreement or where emerging interpretations did not match particular data. Where this was the case, codes were refined and/or differences reported. Key themes emerging from the analysis included the flexibility of the intervention, its feasibility (particularly of locally determined actions), the broad similarity in the range of actions undertaken in each school, the importance of fit with various aspects of baseline school ethos in ensuring feasibility, the importance of key individuals with capacity to deliver, and the value of student participation (italicised in our results). All research participants took part on the basis of their informed, written consent. In addition, students' parents were sent a letter explaining the study and enabling them to withdraw their child from it if they wished. All data were stored anonymously and securely, with codes linking data.
Funding came from the UK Medical Research Council, and the study was approved by the research ethics committees of the London School of Hygiene and Tropical Medicine and the Institute of Education, University of London.
Results: In this section we first report on schools' baseline context and motivation to participate. This provides important background for our subsequent reports of how each intervention component (or "input") was delivered in each school, how schools convened action teams and how these worked, and how actions were planned, delivered and received in each school.Schools' baseline context
Discussion: Summary of findings
|
Managing product returns for reverse logistics
|
[
"Distribution management",
"Returns",
"India",
"Reverse scheduling"
] |
Summarize the following paper into structured abstract.
Introduction: Effective and efficient management of product returns is an intriguing practical and research question. Growing green concerns and the advancement of reverse logistics (RL) concepts and practices make it all the more relevant. Three drivers (economic, regulatory and consumer pressure) drive product returns worldwide. This has also gained momentum because of fierce global competitiveness, heightened customer expectations, pressures on profitability and the demand for superior supply chain performance. Concerns about environmental issues, sustainable development and legal regulations have made organizations responsive to RL. Increased competition, growing markets and a large base of product users in developing countries imply that buyers are gaining more power in the supply chain even in these countries. Thus, managing product returns in an effective and cost-efficient manner is of increasing interest in business as well as in research. It leads to profits and, at the same time, to increased customer service levels and higher customer retention. Consumers expect to trade in an old product when they buy a new one. Different products may be returned at different stages of their life-cycles. They may go for remanufacturing, repair, reconfiguration or recycling as per the most appropriate disposition decision. This creates profitable research and business opportunities. Consequently, original equipment manufacturers (OEMs) are expected to undertake RL activities in an effective and efficient manner. They may do so independently or by outsourcing. Estimation of returns is a prerequisite for the establishment of an effective and efficient RL network and hence is crucial in this context. RL issues are mainly regulatory-driven in Europe, profit-driven in North America and at an incipient stage in other parts of the world, including India, where both consumer awareness and globalization are likely to lead to greater economic, consumer and regulatory pressures in the future. Society in general, and particularly in the Indian context, is still price sensitive and to some extent quality sensitive (quality for a given price) but not environment sensitive in its buying and promotion behavior. Lack of incentives/disincentives from regulatory authorities and lack of pressure from prospective customers and consumers on manufacturers/service providers are inhibiting these initiatives in India. Therefore, RL has not received the desired attention and is generally carried out by the unorganized sector for some recyclable materials such as paper and aluminum. Only recently have some companies in the consumer durables and automobile sectors introduced exchange offers to tap customers who already own such products. Presently, these returns are either resold directly or after repair and refurbishment by firm franchisees/local remanufacturers in the seconds' market. They are not remanufactured or upgraded by OEMs. In fact, the present work is motivated by the increasing sales potential of white goods/brown goods and automobiles and the good response that exchange offers by OEMs or retailers have generated so far. Our study covers different categories of products (Table I), spanning a spectrum from cellular handsets and personal computers (low volumes and growing markets) to black and white televisions (high volumes and declining markets). We cover television sets, passenger cars, refrigerators, washing machines, cellular handsets and personal computers. 
The cumulative annual growth rates (CAGR) in Table I are based on the past ten years' sales and the next ten years' projected demand. Our methodology consists of a brief literature review, wherein we identify significant issues and gaps as well as challenges and opportunities in the area of reverse logistics network design (RLND), especially in the context of estimating and managing product returns. This is followed by conceptual model development in practical settings. The problem being intrinsically complex, the broad solution approach is to partition it into a main model with spot decisions and parameter estimation by various sub-models using appropriate tools and techniques. To actualize this, we develop an integrated modeling framework for an effective and efficient RLND. We conduct informal interviews with 84 concerned stakeholders (28 dealers, 12 distributors and 44 consumers in North India for our selected categories of products) to gauge and capture real-life practices and requirements and to estimate various cost and operations related parameters, as well as the maximum distance of the collection center locations from prospective return sources. Further, existing literature sources, company web sites (OEMs for our relevant categories of products) and other web sources are also utilized as additional inputs to estimate certain parameters such as the range of resolution prices for returns, the number of product grades, the costs and available capacities of facilities, average salvage rates, etc. Finally, we carry out some preliminary experimentation and analysis to draw a few insights and to identify scope for future work.
Literature review: Figure 1 shows the basic flow diagram of RL activities. The complexity of operations and the value recovered increase from bottom-left to top-right in the figure. The pattern of quantity, quality and time of arrival of returns is of paramount importance in RLND. The location of facilities relative to process inputs, customer markets or waste disposal locations has been considered both analytically and empirically in the literature (Schmenner, 1982; Brandeau and Chin, 1989; Appa and Giannikos, 1994; Giannikos, 1998; Pushchak and Rocha, 1998). Collection is the first and a very important stage in the recovery process, where product types are selected and products are located, collected and, if required, transported to facilities for rework and remanufacturing. Used products originate from multiple sources and are brought to a product recovery facility, resulting in a converging process. Cairncross (1992) suggests classifying collection schemes based on whether the initial transport is performed by the consumer (i.e. bring schemes) or by a waste manager (i.e. kerbside collection). Inspection/sorting may be carried out either at the point/time of collection itself or afterwards (at collection points or at rework facilities). Collected items generally need sorting, and sorting used products requires skill (Ferrer and Whybark, 2000). This may or may not be combined with pre-processing. Jahre (1995) found that the converse of sorting complexity is collection complexity. Pre-processing may take the form of sorting, segregation, partial or complete disassembly, or minor repair and refurbishing activities. It may be carried out either at collection centers or at the rework facility, depending upon technological and economic factors. Louwers et al. (1999) discuss it in detail while developing a facility location-allocation model for reusing carpet materials. They include the operational costs related to energy, labor and maintenance as well as the loss of interest related to the facilities. Location and distribution (network design) is the most critical area of RL and is assuming ever greater importance. In many cases, recovery networks are not set up independently "from scratch" but are intertwined with existing logistics structures. In particular, this is true if the OEM recovers products. The location and configuration of facilities frequently affect and are affected by the external environment, mainly the estimated returns. Capacity decisions in general aim at providing the right amount of capacity (i.e. how much) at the right place (i.e. facilities location) and at the right time (i.e. when). Long-range capacity is determined by the size of the physical facilities that are built (Schroeder, 1993). In general, facility decisions are affected by estimated returns (assuming infinite markets), costs, competitors' behavior and other strategic and operational considerations. Operations strategies that entail the installation of new capacity also become more complex as regulatory and consumer demands for returnable/recyclable products increase. Bellman and Khare (1999, 2000) develop the concept of a "critical mass" of returns for profitable remanufacturing/recycling. They argue that the efficiency of RL could be improved by ensuring that product design takes into account the requirements of post-use/post-consumption collection, sorting and recycling. Research issues and gaps
Conceptual model: A three-echelon (consumers' returns-collection centers-rework sites) multi-period model designed for product buy-back (generally through exchange offers) is conceptualized as shown in Figure 2. We assume a "bring scheme," i.e. customers bring the product to a collection/buy-back center (generally in a time-window known a priori via telephone/internet). The company makes the decision about the allocation of customers to collection centers. Customers receive a resolution (refund, cash, exchange offer, etc.) if the return is accepted. There is no take-back obligation. A testing facility and clear-cut return product valuation charts are available at all collection centers. Testing time is negligible and customers are not charged for it. Manpower is skilled in inspection, scanning, sorting and resolution decisions. Recovery strategies and costs for various categories of products are known a priori. For simplicity, we restrict the choice for a collection center to the existing distribution/retail outlets, some or all of which may act as prospective facility locations. Further, the differentiated complexity of operations leads to two distinct types of rework sites: repair and refurbishing centers, and remanufacturing centers. Repair and refurbishing centers require lower capital investment, are more skill-based and generally repair/refurbish goods to make them almost as good as new. Remanufacturing centers require very high capital investment, are more technology-based and produce upgraded remanufactured goods. The rework facilities will come up at some or all of the collection centers, i.e. some locations will have only collection centers, some will have collection centers with either of the two rework facilities, while some may have all three co-located. Co-locating facilities is preferred as this leads to savings in capital and manpower investment as well as transportation costs. The disposition decisions are guided by the profit motive and all the returned goods are resold in the primary or seconds' market after the necessary disposition. The first disposition (sell directly without rework) is carried out at the collection centers themselves, as this involves no substantial investment. The remaining returns go to rework sites as per the disposition decisions. We assume that the various costs, distances, processing times, input parameters and conversion factors (including salvage rates) associated with the activities are known or have been estimated a priori. Prices of various products and modules in primary and seconds' markets in a particular time interval are also known or estimated a priori. There is infinite storage capacity at each facility. Transportation times are negligible in comparison to a single time-period. Different grades of product deteriorate at a fixed rate with time. Inventory is carried to the next period. Definitions of various terms used in the model
The integrated modeling framework: For effective and efficient returns management based on the conceptual model, we develop an integrated modeling framework as shown in Figure 3. It estimates returns and determines location, disposition, capacity and flow decisions for our conceptual RL network through a set of hierarchical models under various scenarios. Our integrated framework also introduces a penalty for inventory deterioration and obsolescence and measures capacities in terms of total processing times. It combines descriptive modeling with optimization techniques and covers costs and operations activities across a wide domain. First, we develop a system dynamics sub-model for estimating return flows over a period of time at various candidate collection center locations based on products-in-use, the average life cycle of products and forecasted demand. We also consider the impacts of an environmental protection policy index (EPPI) and a green image and utility factor (GIUF). Next, we use a simple investment-minimizing optimization model, subject to certain strategic and customer service level constraints, to determine the collection center locations. We use notional per-unit transportation costs for this, since the actual costs are to be borne by those bringing the returns; these are, therefore, lower than the actual costs and are shown later. This model also calculates the estimated returns at these locations. Further, the open sites at a particular point of time act as rigid constraints in the main model for opening rework facilities. The main optimization model determines the disposition decisions, the location and capacity addition decisions for rework sites (remanufacturing centers and repair and refurbishing centers) at different time periods, as well as the flows to them from collection centers. The framework allows experimentation under various scenarios for different categories of products. The insights and learning provided by these results can be utilized by various stakeholders and decision makers for decision making.
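The collection center location sub-model is only outlined above. The sketch below is one plausible reading of it, not the paper's actual formulation: binary open/assign variables, a maximum-distance customer-service constraint, and an objective of fixed opening costs plus notional transport costs. All data values and names are invented for illustration (PuLP).

```python
# Minimal sketch of an investment-minimising collection-centre location
# model of the kind described above. Hypothetical data throughout.
from pulp import LpProblem, LpMinimize, LpVariable, lpSum, LpBinary

sources = range(6)                     # prospective return sources
sites = range(3)                       # candidate centres (existing outlets)
open_cost = [100, 120, 90]             # fixed cost of opening each site
dist = [[5, 12, 30], [8, 6, 25], [20, 7, 10],
        [28, 15, 6], [9, 22, 18], [14, 9, 11]]
returns = [40, 55, 30, 25, 60, 45]     # estimated returns at each source
D_MAX = 15                             # max distance a customer will travel
C_NOTIONAL = 0.1                       # notional per-unit-distance cost

prob = LpProblem("collection_centres", LpMinimize)
y = {j: LpVariable(f"open_{j}", cat=LpBinary) for j in sites}
x = {(i, j): LpVariable(f"assign_{i}_{j}", cat=LpBinary)
     for i in sources for j in sites if dist[i][j] <= D_MAX}

# Every return source must be assigned to exactly one open centre
# within the maximum-distance customer-service limit.
for i in sources:
    prob += lpSum(x[i, j] for j in sites if (i, j) in x) == 1
for (i, j) in x:
    prob += x[i, j] <= y[j]

# Minimise investment plus notional transport cost of the returns.
prob += (lpSum(open_cost[j] * y[j] for j in sites)
         + lpSum(C_NOTIONAL * returns[i] * dist[i][j] * x[i, j]
                 for (i, j) in x))
prob.solve()
print("open sites:", [j for j in sites if y[j].value() == 1])
```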
Data collection and estimation of returns: For real-life application of the proposed framework, the input data may be classified into two groups: 1. returns data, which include the types and the time-varying amount associated with each type of returned product; 2. operations and cost related parameters such as costs of facilities, capacity block sizes, processing times, penalty rates for inventory deterioration, fraction recovery rates, average number of recoverable modules, storage costs, processing costs, transportation distances, transportation costs, procurement costs, resale prices and so on. Many of these have high variances. Forecasting techniques are mainly dependent on historical data for the underlying process or a similar process. The existing literature groups quantitative techniques into two categories, time series and causal analysis (Jeong et al., 2002). Further, in the case of returns, take-back rates are either estimated as a percentage of sales under different scenarios (Shih, 2001) or estimated by distribution models (Jayaraman et al., 1999). Marx-Gomez et al. (2002) use a neuro-fuzzy approach to forecast returns of scrapped products, whereas Jeong et al. (2002) devise a computerized causal forecasting system using a genetic algorithm. Both these works use historical data. Most papers (Kiesmuller and van der Laan, 2001; Vlachos and Dekker, 2003; Mostard et al., 2005) consider random returns dependent explicitly on demand, whereas Sheu et al. (2005) assume the time-varying quantity of product returns to be controllable. We neither consider returns explicitly dependent on demand nor use any approach that explicitly needs historical data of returns. Instead, we develop a causal system dynamics model that associates returns with the number of products-in-use, estimated demand and the product life cycle (PLC). It also considers the impact of environmental protection policies and the green image and utility factor. We estimate most of the relevant parameters through informal interviews with concerned stakeholders due to the unavailability of historical data. Estimation of products-in-use
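Since the system dynamics equations themselves are not reproduced in the excerpt, the following is an illustrative (non-validated) sketch of the causal logic just described: returns in each period are driven by the installed base of products-in-use, demand and average product life, scaled by the EPPI and GIUF factors noted above. Parameter values and the functional form are assumptions.

```python
# Illustrative sketch of the causal returns-estimation logic; the paper's
# actual system dynamics formulation is not given in the excerpt.
def estimate_returns(demand, life_cycle_years, eppi, giuf, periods):
    in_use = 0.0
    returns = []
    for t in range(periods):
        in_use += demand[t]
        # Fraction of the installed base reaching end of life this period.
        end_of_life = in_use / life_cycle_years
        # Policy pressure (EPPI) and green image (GIUF) scale actual take-back.
        collected = end_of_life * eppi * (1 + giuf)
        returns.append(collected)
        in_use -= end_of_life
    return returns

# Example run with invented values: flat demand, seven-year average life.
print(estimate_returns([100] * 10, life_cycle_years=7,
                       eppi=0.6, giuf=0.2, periods=10))
```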
Conclusions and scope for future work: Literature that covers both remanufacturing and RL in an integrated manner is scarce. We provide a framework that covers a wide domain of activities, ranging from the estimation of returns at different locations in different time-periods to their actual collection and disposition down to the modular stage. It has been implemented in the form of a streamlined, integrated multiple time-period model that takes care of statutory requirements and consumer preferences and simultaneously respects strategic and operational constraints for optimizing profits. Standard software packages, decomposition methods and heuristics have been utilized for the solution. Klausner and Hendrickson (2000) describe product returns through third party logistics providers. They suggest that buy-back would be a better option. Fleischmann et al. (2001) also suggest that buy-back may lead to higher rates of returns and thereby to economies of scale. Recently, Jayaraman et al. (2003) have used resolution to customers in their model. Our framework too considers the resolution price. Besides, it also considers the sale of recovered modules, a step beyond the consideration of revenue from the sale of reclaimed material (Shih, 2001). We also use customer service-related constraints in our model for the collection center opening decision from a given set of candidate locations. These take care of customer convenience by stipulating the maximum distance for carrying the returns. Bloemhof-Ruwaard et al. (1996) and Hirsch et al. (1998) have used such constraints earlier, but in slightly different contexts. Recently, Krikke et al. (2003) have used similar constraints. Guide and Pentico (2003) propose a framework for remanufacturing using a closed-loop hierarchical model to aid in the designing, planning and controlling of logistics and related activities. This allows financial incentives to control product returns, so that the timing, quantity and quality of products as well as the associated logistics functions become more predictable. We consider product returns uncontrollable, but at the same time estimate them. We consider average inventory for holding costs, unlike the prevalent literature practice of using end-of-period inventory. We also use a discount factor to optimize the net present value of the objective functions in our MILP models. Our study shows that the impact of the quality, quantity and timing of returns on the overall RLND and profits is significant, and hence the estimation of returns is important. EPPI and GIUF directly impact returns in the ratio EPPI × (1 ± GIUF); the ratios of the pessimistic:most likely:optimistic scenarios are found to be of the order of 0.15:1:2 approximately. Thus, we agree with Listes and Dekker (2005) that data assumptions have direct implications for the construction of the underlying scenario. Government policies and consumer behavior impact returns, and thereby RLND, a great deal. These should be analyzed and modeled carefully. Industry should work to increase product recyclability, develop life-cycle-analysis capabilities and improve communication among its segments. Efforts should be undertaken to strengthen and expand industry coalitions and link with third party providers. The existing infrastructure needs expansion, policy makers and citizens need education and there is a need to extend producer responsibility. 
We need to replace manufacturing focused on the use of virgin materials with a new holistic approach that unleashes synergy between economic development and the environment. Our work has a few limitations. There is uncertainty in system parameters due to the lack of actual historical data. Uncertain economies of scale and undeveloped/underdeveloped markets also limit applicability. We deal with the supply side (returns) and returns' disposition but do not consider in any detail the coordination of the two markets. We still follow a PUSH system where the volumes of returns drive the decisions, and we do not consider controlling product returns. We also do not explicitly consider the promotion of goods in exchange offers. A recent paper (Savaskan et al., 2004) considers many of these issues explicitly, assuming closed-loop supply structures as given. Our present work is more or less complementary to this paper. We have carried out estimations and optimization for product categories and not brands or OEMs per se; however, inferences can be drawn for them by simply using a percentage of returns equal to the market share of the brand or OEM. The facilities are chosen from given location options; there is no free choice. Further, we consider a profit maximization model that does not incorporate any penalty for lost returns and customer dissatisfaction. To conclude, this paper considers several practical issues and describes a framework that provides near-optimal, profitable solutions for managing product returns in India. Our application of system dynamics for estimating returns is a novel one and we pre-state the model purpose explicitly. This has significant theoretical and practical implications in terms of applicability and utility that need to be explored further. At the methodological level, our framework combines descriptive modeling with optimization techniques, while at the topological level it provides detailed solutions for network configuration and design. This framework and solution approach may be extended further to meet specific requirements. It can easily incorporate multiple cost structures, market-side considerations and constraints related to a resource conservation perspective. It may easily be used for other potential products such as tires and batteries. Although the study was done in the Indian context, the framework may easily be applied to situations in other developing countries. Our next phase of study will mainly focus on experimentation with the optimization models, using variations in processing times, processing costs, salvage rates and other sensitive and significant input parameters for these selected categories of products, to provide companies with insights for various decisions related to RLND. The integrated model will be used to calculate the break-even values of returns for setting up various facilities for these categories of products in order to maximize overall profit over a ten-year time-horizon. The insights and learning provided by results under different scenarios may be utilized as inputs for decision-making by various stakeholders and decision-makers. Developing and further improving RL concepts means that it will become (more) beneficial for manufacturing companies to implement recycling, refurbishing and remanufacturing operations for economic reasons alone, besides meeting consumer pressures and regulatory norms. By determining the factors that most influence a firm's RL undertakings, it can concentrate its limited resources in those areas. 
Areas and topics for further research include: integrated logistics for network design (under which circumstances should returns be handled, stored, transported and processed jointly with forward flows, and when should they be treated separately); comparing the cost of remanufacturing with the cost of production from virgin materials; the potential attractiveness of postponement strategies in RL; changes in a firm's RL strategy for a particular product over the course of the product's life; and modeling for situations where customer returns cannot be turned down (a cost minimization model).
|
Perceived organizational support as a mediator of the relations between individual differences and psychological contract breach
|
[
"Psychological contracts",
"Organizational culture",
"Affective psychology",
"Work ethic",
"United States of America"
] |
Summarize the following paper into structured abstract.
__NO_TITLE__: Over the past 20 years there has been a great deal of research attention devoted to the study of employees' psychological contracts (e.g. Coyle-Shapiro and Conway, 2005; Dabos and Rousseau, 2004; De Cuyper and De Witte, 2006; Deery et al., 2006; Ho et al., 2006; Hui et al., 2004; Lester et al., 2007; Sutton and Griffin, 2004; Rousseau, 1989; Suazo et al., 2009; Taylor et al., 2009; Tekleab et al., 2005; Thompson and Hart, 2006; Turnley et al., 2004; Zhao et al., 2007). In general, the research on psychological contracts has focused primarily on examining the outcomes of psychological contract breach (Conway and Briner, 2005). This line of inquiry indicates that the perception of psychological contract breach (PCB) is associated with a wide array of negative workplace outcomes including reduced work performance (e.g. Robinson, 1996; Robinson and Morrison, 1995), less willingness to engage in organizational citizenship behavior (e.g., Kickul et al., 2004; Suazo et al., 2005), reduced trust in one's employing organization (e.g. Deery et al., 2006; Robinson, 1996), and increased cynicism about organizational life in general (Johnson and O'Leary-Kelly, 2003). Clearly, then, employees' perceptions of PCB have been associated with a whole host of attitudes and behaviors that are likely to have negative ramifications for employees and employers (Conway and Briner, 2005; Zhao et al., 2007).
Psychological contracts and psychological contract breach: The psychological contract is defined as an employee's beliefs regarding the terms and conditions of the reciprocal exchange agreement between that individual and the employing organization (Rousseau, 1989; Schein, 1965). That is, the psychological contract is based on the promises made between the employee and employer that determine what each party is expected to provide to and receive from the other (Rousseau, 1995). Perceived PCB occurs when an employee believes that he or she has received less than what was promised by the employer and, thus, the organization has failed to honor its commitments (Robinson and Morrison, 1995).The concept of the psychological contract can be traced to the seminal work of Argyris (1960), Levinson et al. (1962), and Schein (1965). With the exception of this early work, very little research (i.e. Kotter, 1973) was conducted on the psychological contract until Rousseau (1989, 1995) reconceptualized the construct. Rousseau's (1989, 1995) reintroduction of the construct spawned a flurry of research activity on the relations between perceived PCB and workplace outcomes. This stream of research has found that perceived PCB is related negatively to a variety of workplace attitudes and behaviors such as job satisfaction (e.g. Sutton and Griffin, 2004; Tekleab and Taylor, 2003), trust (e.g., Deery et al., 2006; Robinson, 1996), organizational commitment (e.g. Bellou, 2008; Lester et al., 2002; Robinson, 1995), in-role behavior (e.g. Lester et al., 2002; Johnson and O'Leary-Kelly, 2003), and organizational citizenship behavior (e.g. Suazo, 2009; Turnley and Feldman, 2000). In contrast, relatively little research has examined predictors of perceived PCB (e.g. Raja et al., 2004), which is the focus of this study.
The relation between individual differences and psychological contract breach: Although theoretical research has indicated that individual differences may predict perceived PCB (e.g. Rousseau, 1990, 1995; Rousseau and Parks, 1993; Turnley and Feldman, 1999), there have been relatively few empirical investigations of how these differences relate to individuals' perceptions of the psychological contract (e.g. Raja et al., 2004). In this paper, we hypothesize that both affect-based individual differences (positive affectivity, negative affectivity) and cognition-based individual differences (reciprocation wariness, equity sensitivity, PWE) are likely to predict perceived PCB. By investigating the characteristics highlighted in this study, we aim to shed light on the highly idiosyncratic nature of the psychological contract. Affective disposition
Perceived organizational support: POS is another construct based on the ideas of social exchange theory (e.g. Blau, 1964). The POS construct was developed by Eisenberger et al. (1986) to describe an employee's perception of the organization's commitment to him or her. POS is based on the experiences of the employee and leads to attributions "concerning the benevolent or malevolent intent of the organization's policies, norms, procedures, and actions as they affect employees" (Eisenberger et al., 2001, p. 42). According to research on POS, employees form global beliefs regarding the value that the organization places on their well-being and contributions (Eisenberger et al., 1986). Employees high on POS believe that the organization cares about them and values their contributions. The result is that high POS employees tend to have greater levels of affective commitment and attachment to their organizations than low POS employees. Moreover, when compared to low POS employees, high POS employees have stronger beliefs that greater efforts will lead to greater rewards. Most prior research has examined the effect of POS on individuals' work attitudes and behaviors. For example, research has found that POS is positively related to perceived obligations to the organization (Eisenberger et al., 2001), job satisfaction (Eisenberger et al., 1997), organizational commitment (Allen et al., 2003), in-role performance (Eisenberger et al., 2001), and organizational citizenship behaviors (Coyle-Shapiro et al., 2006). Prior research has also found that POS is negatively related to withdrawal behavior (Cropanzano et al., 1997) and intention to quit (Wayne et al., 1997). POS has been found to be positively related to high-quality supervisor-subordinate relationships (Wayne et al., 2002) and POS "appears to be an important source of esteem, affiliation, emotional support, and approval in the workplace" (Armeli et al., 1998, p. 293). In the psychological contract domain, Dulac et al. (2008) found that POS was negatively related to perceived PCB and that POS moderated the positive relation between perceived PCB and psychological contract violation such that the relation was stronger for low POS employees than high POS employees. POS has also been reported to mediate the relation between procedural justice and perceived PCB (Tekleab et al., 2005). Although the outcomes of POS have been widely studied, relatively less attention has been paid to the predictors of POS. In the studies to date, researchers have found that human resource practices (Allen et al., 2003; Hutchison and Garstka, 1996; Wayne et al., 1997), intrinsically and extrinsically satisfying job conditions (Stinglhamber and Vandenberghe, 2003), and organizational justice (Liden et al., 2003; Moorman et al., 1998) are predictors of POS. Interestingly, the research on both the predictors and outcomes of POS has focused almost exclusively on situational variables. We are aware, for instance, of only two published studies (Hui et al., 2007; Lilly and Virick, 2006) that directly examined individual difference variables as predictors of POS. In those studies, Hui et al. (2007) found that positive affectivity was positively related to POS, and Lilly and Virick (2006) found that internal locus of control was positively related to POS. Building on the theoretical and empirical research on POS, we advance a proposition that has yet to be studied: namely, that POS mediates the relations between individual differences and perceived PCB. 
Our premise is that individual differences should predispose individuals toward high or low levels of POS and that POS should, in turn, predict perceived PCB. This premise is supported by Hui et al.'s (2007) and Lilly and Virick's (2006) findings that positive affectivity and locus of control, respectively, are positively related to POS. Moreover, when an employee's general assessment of organizational support is high, the individual is likely to interpret specific events more positively and make more favorable attributions about the organization across the board. Thus, when employees experience a high level of POS, unpleasant aspects of their work experience are less likely to be framed as breaches of the psychological contract. In contrast, when the individual perceives that the organization cares relatively little about its employees (low POS), the same events are more likely to be perceived as PCB. H5. POS will mediate the relations between (a) positive affectivity, (b) negative affectivity, (c) reciprocation wariness, (d) equity sensitivity, and (e) PWE and perceived PCB.
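The excerpt does not state the exact procedure used to test H5; one common regression-based approach (Baron and Kenny, 1986) is sketched below for a single trait, with hypothetical column names and the controls mentioned later in the paper (organizational tenure, time under current supervisor, gender). Full mediation is suggested when the trait's coefficient on PCB becomes non-significant once POS is added.

```python
# Sketch of a regression-based mediation test; file and column names
# (survey.csv, pcb, pos, positive_affect, ...) are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("survey.csv")  # one row per employee
controls = "tenure + time_with_supervisor + gender"

# Step 1: trait -> perceived PCB (total effect).
step1 = smf.ols(f"pcb ~ positive_affect + {controls}", data=df).fit()
# Step 2: trait -> POS (the proposed mediator).
step2 = smf.ols(f"pos ~ positive_affect + {controls}", data=df).fit()
# Step 3: trait + POS -> PCB; compare the trait coefficient with step 1.
step3 = smf.ols(f"pcb ~ positive_affect + pos + {controls}", data=df).fit()

for name, m in [("total", step1), ("to mediator", step2), ("direct", step3)]:
    print(name, m.params["positive_affect"], m.pvalues["positive_affect"])
```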
Method: Sample
Results: The means, standard deviations, and correlations for the measures used in this study are presented in Table I. All measures were evaluated to be reliable (Cronbach's alphas ranging from 0.75 to 0.95). Cronbach's alphas for the measures appear along the diagonal of Table I.The relations between individual differences and perceived PCB
Discussion: The present study focused on gaining a better understanding of several potential predictors of perceived PCB. In particular, we had two primary objectives in conducting this research. Our first objective was to examine the relations between organizationally relevant individual differences and perceived PCB. Specifically, we examined five individual characteristics (positive affectivity, negative affectivity, reciprocation wariness, equity sensitivity, PWE). The results support our hypotheses for all of the characteristics examined. Moreover, these results remained significant even after controlling for organizational tenure, time under current supervisor, and gender. The findings are important for the development of psychological contract theory because they highlight the need to gain a better understanding of how individual differences might relate to the perception of PCB. Developing a better understanding of individual differences may provide insights into the highly idiosyncratic nature of psychological contracts. Our second objective was to examine whether POS mediated the relations between individual differences and perceived PCB. The findings indicated that POS fully mediated the relations between individual differences and perceived PCB for four out of the five characteristics examined (positive affectivity, reciprocation wariness, equity sensitivity, PWE) in this study. POS was found to partially mediate the relation between negative affectivity and perceived PCB. These findings indicate that individual differences may be related to perceived PCB indirectly through generalized perceptions of organizational support (or the lack thereof). This study extends prior work related to three separate streams of research. In particular, it extends prior research focused on individual differences, perceived PCB, and POS. In the individual differences domain, our research adds to the growing body of knowledge regarding potential mediating variables in the relations between individual characteristics and important outcomes in organizations. For example, Parker et al. (2006) found that cognitive-motivational states (i.e. role breadth self-efficacy, flexible role orientation) mediated the relation between proactivity and proactive work behavior. Lilly and Virick (2006) reported that POS mediated the relation between locus of control and procedural and interactional justice. Our contribution to this stream of research is the finding that POS mediated the relations between various individual differences and perceived PCB. This finding suggests that the relation between individual characteristics and perceived PCB tends to be indirect. That is, personality traits and other individual differences tend to be related to individuals' perceptions of the extent to which organizations care about their employees. This general perception, in turn, is related to the employees' interpretations of specific organizational events related to psychological contract fulfillment. Thus, these findings may help to explain why two employees working side-by-side and experiencing the same organizational or group environment may interpret their psychological contracts very differently. In the perceived PCB domain, our research extends the literature on predictors of perceived PCB (Coyle-Shapiro and Neuman, 2004; Raja et al., 2004). For example, Raja et al. limited their inquiry to a few specific personality traits primarily associated with the Big 5 personality dimensions. 
In our study we included both personality traits (i.e. positive affectivity, negative affectivity) and other types of individual differences (e.g. reciprocation wariness, PWE). By including characteristics that were not considered by Raja et al. or by Coyle-Shapiro and Neuman (2004), our study extends the literature on individual differences as predictors of perceived PCB. In several instances, our results support and extend those reported by Raja et al. (2004). For example, Raja et al. hypothesized that neuroticism would be negatively related to perceived PCB. What Raja et al. found was a statistically significant relation in the opposite direction. Given that neuroticism corresponds to negative affective disposition (George, 1992; Meyer and Shack, 1989), Raja et al.'s result is consistent with our finding that negative affective disposition is positively related to perceived PCB. Our findings also provide support for the generalizability across societies of individual characteristics as predictors of perceived PCB, as our study sampled employees in the USA whereas Raja et al. sampled employees in Pakistan. Finally, in the POS domain, researchers have just begun to investigate individual differences as predictors of POS. We were only able to find two empirical studies (Hui et al., 2007; Lilly and Virick, 2006) that directly examined individual differences as predictors of POS. Hui et al. (2007) reported that positive affectivity is positively related to POS. Lilly and Virick (2006) found that internal locus of control is positively related to POS. Additional preliminary evidence for individual characteristics as determinants of POS has also been presented in a study where the correlation of personality with POS was reported, although personality was not the main focus of the study. In particular, even though George et al. (1993) did not explicitly examine personality as a predictor of POS in their study of the stress experienced by nurses working with AIDS patients, an examination of their correlation matrix suggests that there is a negative relation between negative affective disposition and POS (-0.16, p<0.01). Therefore, our study significantly expands the published evidence on the relations between individual differences and POS. In particular, our results suggest that individuals high on positive affectivity perceive high levels of POS, individuals high on negative affectivity perceive low levels of POS, reciprocation wary individuals perceive low levels of POS, highly equity sensitive individuals (i.e. entitled individuals) perceive low levels of POS, and high PWE individuals perceive high levels of POS. Our findings may be of consequence to research on POS because they help to shift the focus from organizational level determinants of POS to individual level determinants of POS. Practical implications
|
Unfold studio: supporting critical literacies of text and code
|
[
"Critical literacy",
"Computational thinking",
"Literacy",
"Design-based research",
"Computer science education",
"Computational literacy",
"Multiliteracies",
"K-12 computer science education"
] |
Summarize the following paper into structured abstract.
Introduction: Literacy is about much more than learning to read and write. The practices which emerge within networks of people and texts often have prosaic goals such as conveying messages, documenting agreements and establishing authority, but they can profoundly reshape participants' cognition, identity practices and social relationships. Ong (2013) argues that privileged access to reading and writing led to the emergence of new social roles high in the status hierarchy, and that widespread literacy in a society "restructures consciousness" (p. 77) by synchronizing frames of reference such as dates, facts and perspectives on the world. In addition to supporting practices which define social roles and relationships, Scribner and Cole (1978) found that reading and writing were associated with changes in individual cognition such as improved abstract communication, memory and language analysis skills (pp. 27-29). It is not necessary to argue for a direct causal link between reading and writing and cognitive change; rather, they may be seen as tools which have the potential to spur a different developmental path for the individual and for the society (Vygotsky, 1980).
Background: Literacy spaces
Workshops I and II: developing Unfold Studio: This section reports on the initial development of Unfold Studio through Workshops I and II with middle-school students. An iterative design process, focused on emergent and imagined interactive storytelling practices, helped develop the Web application and the analytical framework described above. These results framed a hypothesis that interactive storytelling could be particularly effective in supporting critical change within and beyond the classroom literacy space. Workshop III, designed to test this hypothesis, is reported in the following section.
Workshop III: toward critical multiliteracies: The result of Workshops I and II was a medium capable of supporting textual-computational literacy practices through interactive storytelling, and a hypothesis that these practices could be particularly effective in supporting critical change within and beyond the classroom literacy space. Following Schwartz et al.'s (2008) suggestion that design-based research ought to move from innovative design toward efficiency, we designed Workshop III to test this hypothesis.
Discussion: The design-based research reported in this article yielded fruitful answers to the initial research questions. The first two studies explored the potential uses of interactive storytelling and developed the Web application's affordances to better support participants' aspirations for the medium. Workshop III validated critical discourse models as tools for critical engagement and documented the role of textual and computational affordances. In each workshop, the participants were involved in planning the workshop, framing the questions and interpreting the results. Their participation was essential to the validity of the findings and also to ensuring that the research process could play an equitable role in the literacy spaces which were the focus of study.
Conclusion: As our society completes its shift from print text to digital media, schools must prepare youth to participate in new forms of literacy. It is clear that computational media do not necessarily lead to the just, peaceful and inclusive social structures imagined by the pioneers of personal and social computing. Indeed, computational media have enabled powerful new forms of surveillance, control and amplification of oppressive ideologies. If we want to support youth in self-authorship, critical agency and participation in designing socio-technical futures, it is imperative that our schools cultivate critical computational literacies which center the lives and identities of the community. The design-based research reported in this article yielded a concrete step toward that goal. As Unfold Studio makes its way into classrooms and writing clubs, future research will continue the project of developing a medium well-suited to supporting critical literacy practices.
|
Benchmarking company performance from economic and environmental perspectives: Time series analysis for motor vehicle manufacturers
|
[
"Benchmarking",
"Performance measure",
"Time series forecasting",
"Motor vehicle manufacturer"
] |
Summarize the following paper into structured abstract.
1. Introduction: Growing concerns about environmentally sustainable development call for data analysis from both economic and environmental (E&E) perspectives. For instance, to assess the E&E performance of different countries, data analysis has been performed via analytical applications by the System of Environmental Economic Accounting. Unlike such data analysis, which is at the national or even broader global level, this research focuses on E&E performance analysis at the company level.
2. Company performance measurement: Stakeholder theory suggests that companies should go beyond shareholders' interests to include other stakeholders (Keeble et al., 2003; Pullman and Wikoff, 2017). In terms of company performance from E&E perspectives, key stakeholders of motor vehicle manufacturers (MVMs) consist of customers, business partners, owners, employees, investors, government, non-government organizations (NGOs) and non-profit organizations (NPOs). The main concerns of stakeholders regarding company performance are listed in Table I. Based on the different stakeholders' concerns and a literature review, company performance measures from E&E perspectives are identified.
3. Development of the performance measurement approach: Taking the five limitations into account, this research aims to develop an approach with five requirements: it integrates measures from E&E perspectives; it is designed for MVMs, taking their specific background into consideration; it is based on data which are available from public documents; it is mathematically constructed with transparency in generating time series data; and it provides a forecast value for benchmarking the future performance of MVMs in the following fiscal years.
4. Case study: 4.1 Sample cases
5. Discussion: This research develops an approach to measuring the performance of MVMs from E&E perspectives. An index, IMVM, is constructed as the measure of performance from E&E perspectives. Its historical data during FY2008 to FY2017 are generated by Equation (12). In addition, its future values in FY2018 are generated by ARIMA models of the best fit. Benchmarking has been recognized as one of the most widely known improvement techniques or tools in the world (Al Nuseirat et al., 2019). The data from this research can contribute to benchmarking the historical performance (during FY2008-FY2017) of MVMs relative to their competitors, as well as the forecast performance in FY2018.
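Equation (12) and the model-selection details are not reproduced in the excerpt; the sketch below shows, with invented index values, how one might fit candidate ARIMA orders to the ten-year IMVM series and take the best-fitting model's FY2018 forecast.

```python
# Sketch of the forecasting step: grid-search small ARIMA orders by AIC
# over a hypothetical IMVM series (FY2008-FY2017) and forecast one year.
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

# Invented IMVM values for FY2008..FY2017 (positions 0..9).
imvm = pd.Series([0.42, 0.45, 0.43, 0.48, 0.51, 0.50, 0.55, 0.57, 0.56, 0.60])

# Pick the best-fitting (p, d, q) order by AIC over a small grid.
fitted, order = min(
    ((ARIMA(imvm, order=(p, d, q)).fit(), (p, d, q))
     for p in range(3) for d in range(2) for q in range(3)),
    key=lambda m: m[0].aic)

print("best order:", order, "FY2018 forecast:", fitted.forecast(1).iloc[0])
```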
6. Conclusions: This research developed an approach to measuring the performance of MVMs from E&E perspectives. The integration of eight measures from E&E perspectives answered the first sub-question. An index, IMVM, and ARIMA models of the best fit were constructed to generate the time series data of this performance. This answered the second sub-question as well as the main research question.
|
The eight imperatives of effective adult learning: Designing, implementing and assessing experiences in the modern workplace
|
[
"Assessment",
"Adult learning",
"Learning design",
"Training effectiveness"
] |
Summarize the following paper into structured abstract.
__NO_TITLE__: A senior HR leader once commented: "There is no getting around the fact that lots of us in HR and talent management are not really good at facilitating adult learning with our people... We tend to throw information at people and hope that it sticks. In the end, we only create trouble and problems...We do not take adult learning seriously."
Key imperatives for adult learning: Imperative No. 1: relevance, importance and utility are paramount
In closing: We have shared these learning imperatives with the purpose of getting you to think through the specific things that can help to increase and accelerate learning across the spectrum of adult-learning opportunities in the modern workplace.
|
Trust building in supply chain partners relationship: an integrated conceptual model
|
[
"Supply chain management",
"Trust",
"Channel members",
"Channel relationships",
"Supply chain relationships",
"Concept of trust",
"Trust building models",
"Perspectives of trust",
"Perspectives of risk"
] |
Summarize the following paper into structured abstract.
Introduction: Trust is often referred to as an essential element of a successful supply chain partners' relationship. Spekman and Davis (2004) argued that trust is at the heart of managing risk and a prerequisite (Kasperson et al., 2003) in supply chains. Agarwal and Shankar (2003) argued that one of the prevalent issues in the introduction of e-commerce systems along the supply chain is the ability to establish the dynamic and flexible structures for buyer-supplier relationships and the online trust that drive both parties toward strategic partnerships and cooperation. Sinha et al. (2004) mentioned that lack of trust is one of the major factors contributing to supply chain risks. With the emergence of RFID-based u-commerce, the issue of consumer trust has gained additional importance because consumers are usually more concerned about trust whenever new technologies are introduced in commerce (Lee, 2007). A reciprocal relationship of trust in management is vital not only for enhancing performance but perhaps also for minimizing the incidence of charlatan behavior when morale and employee commitment are high (Gbadamosi et al., 2007). Thus, researchers and practitioners are turning their attention to the concept of trust as a mechanism enabling managers to achieve organizational openness and, ultimately, competitiveness while reducing social uncertainty and vulnerability (Mollering, 2004). Despite the availability of a vast literature on trust, there is no clear understanding of the concept of trust as it refers to supply chain partners' relationships; as Halliday (2003) mentioned, there is no construct of trust with a clear definition, or even one complex definition. According to a number of guest editorial review articles in special issues of management journals (e.g. Harrison, 2003; Mollering, 2004; Arnott, 2007), there is a need for studies on conceptual issues, for empirical testing of multi-constellations of trust with respect to vulnerability and risk and the nature and extent of uncertainty, and for an integrated view of trust. In supply chain management there is still an unresolved issue: how to build trust in supply chain partners' relationships? To address this issue, this paper develops an integrated conceptual model for building trust in supply chain partners' relationships, referring to various trust building models in the literature. Though the model represents the trust building process at the dyadic level, the concept can easily be extended to any number of levels and perspectives. The next section provides an understanding of the concept of trust, the following section discusses various trust building models, and the section after that presents the integrated conceptual model for supply chain partners' relationships. Finally, the paper concludes with directions for future research on the trust building process.
Concept of trust: From a careful analysis of various definitions of trust in the literature, we can note that a trust relation implies the participation of at least two parties, a trustor and a trustee. The trustor is the party who places him or herself in a vulnerable situation under uncertainty. The trustee is the party in whom the trust is placed, who has the opportunity to take advantage of the trustor's vulnerability. Correspondingly, there are two concepts of trust in the literature. The first stream is based on the argument that trust is embedded within the trustor (feelings, emotions and cognition), not in the trustee. For example, in psychology research, the frequently used definition of trust comes from Rotter (1967). In his definition, trust was conceptualized as a belief, expectancy or feeling that is deeply rooted in personality and has its origins in an individual's early psychosocial development. In the social view, trust is a belief that other people will honor obligations in varying contexts in an open commitment to promote social welfare through mere conformity with conventions (Soroka et al., 2003). In management research, specifically, Zand (1972) described trust as a gradual, self-reinforcing phenomenon. McAllister (1995) believed trust comprises cognitive judgments about another's competence or reliability and an emotional bond of an individual toward the other person (referred to as "affect-based trust"). Trust can refer to "the expectation that a person can have confidence in, or reliance on, some quality or attribute when undertaking a business transaction" (Small and Dickie, 1999). According to this stream of arguments, trust is all about an individual's (trustor's) disposition to trust the trustee with benevolence and free will. The second stream is based on the argument that trust is embedded within the trustee. The trustee need not be another person: it could be a competency, ability, brand, a piece of equipment, technology, calculations, an institutional system, or security, etc., depending on the context of trust. For example, Rousseau et al. (1998) interpreted trust in terms of perceived probabilities and suggest that, in a knowledge-based economy, a trustee's competence, ability and expertise become increasingly important as indicators of his or her ability to act as anticipated. According to the definition of trust given by Doney and Cannon (1997), trust requires an assessment of the other party's credibility and benevolence; one party must have information about the other party's past behavior and promises. According to Coleman (1990), individuals calculate the gains which might result from their decision to trust another individual before they actually make their decision to trust each other. Bachmann (2001) argues that inter-organizational trust is especially dependent on, and mediated by, the institutional framework in which the relationship is embedded. According to Lippert (2001), technology trust is an individual's willingness to be vulnerable to a technology based on expectations of predictability, reliability and utility, influenced by the individual's predisposition to trust technology. Delgado-Ballester et al. (2003) define brand trust as "the confident expectations of the brand's reliability and intentions in situations entailing risk to the consumer." 
According to this second stream, trust is all about how trustworthy the trustee is, and it is partially a product of the trustor's capacity to assess the trustworthiness of the trustee. From both concepts it can be noted that trust is the trustor's choice, either rational or non-rational. Deutsch (1958) describes trust as a non-rational choice of a person faced with an uncertain event in which the expected loss is greater than the expected gain, or a rational choice based upon optimistic expectations or confidence about the outcome of an uncertain event, given personal vulnerability and the lack of control over the actions of others (Zand, 1972). Further, trust cannot exist in an environment of certainty (Bhattacharya et al., 1998); some level of uncertainty is required for trust to emerge (Dasgupta, 1988). Therefore, trust is a relatively informed attitude or propensity to allow oneself and perhaps others to be vulnerable to harm in the interest of some perceived greater good (Michalos, 1990), and hence it is a risky engagement (Luhmann, 1979). Finally, as argued by Laeequddin et al. (2009), trust is a threshold level of a supply chain member's (trustor's) risk-bearing capacity related to the trustee. Beyond the trustor's risk-bearing capacity, the subject of trust turns into risk management rather than a matter of trust.
Trust building models: Over the years, researchers from different fields have developed trust building models considering various factors from three key perspectives, emphasizing that the trust building process depends on the trustee's characteristics (e.g. ability, benevolence, integrity, credibility, etc.), rationale (e.g. calculations, cost/benefit, technology, etc.) and institutions (e.g. contracts, agreements, control mechanisms, security, etc.), assuming that the trustor will perceive or evaluate them positively. Some of the most referred-to trust building models in the management literature are: Lewicki and Bunker (1995), Mayer et al. (1995), Doney and Cannon (1997), Sheppard and Sherman (1998), Tan and Thoen (2001), Nooteboom (1996), Bhattacharya et al. (1998) and Das and Teng (1998). In order to develop an integrated model of trust building in supply chain partners' relationships, we have selected only five appropriate models, discussed within the limitations of space. Trust development model - Lewicki and Bunker (1995)
Trust building in supply chain partners relationship - an integrated conceptual model: There are two important points to note in the trust building process. The first is that information is pivotal to trust building. When supply chain members have access to complete mutual information about a partner's reliability, calculations, consequences and controls, and they are certain that there is no risk involved in the relationship, then trust has no relevance; complete knowledge obviates the need for trust, though trust can still be present (i.e. no risk = trust). On the other hand, when the partners lack mutual information and are in a state of total ignorance about the future outcome of the relationship, there can be no reason to trust, as risk prevails (i.e. risk = no trust). The second point is that some level of uncertainty is required for trust to emerge (Dasgupta, 1988), and the propensity to trust leads to risk taking. Other notable arguments in the trust building process are that, in order to trust, one does not need to risk anything; however, one must take risk in order to engage in trusting action. Trust is the willingness to assume risk (behavioral trust is the assuming of risk). If the level of trust surpasses the threshold level of perceived risk, the trustor will engage in risk taking in relationship (RTR); if the level of perceived risk is greater than the level of trust, the trustor will not engage in the RTR (Mayer et al., 1995). Individuals only engage in transactions if their level of trust exceeds their personal threshold (Tan and Thoen, 2001). Conversely, trust in a relationship is limited by the threshold level of a supply chain member's risk-bearing capacity (Laeequddin et al., 2009). In a power-based and highly dependent context, the weaker party may find the stronger party "risky" (not trustworthy). However, under compelling situations of dependency, if the loss from not engaging in the relationship appears greater than the risk of engaging, the weaker party will evaluate the stronger party as not trustworthy but "risk worthy" for short-term financial or non-financial gains, staying in the relationship until the relationship context changes, knowing that they are being controlled and monitored. Research within social exchange has shown that the risk of being exploited in social relations facilitates some degree of commitment and attachment building as a way of reducing uncertainty (Molm et al., 2000). We can consider this a "risk worthy" relationship. As trust is the willingness to take risk (Mayer et al., 1995), risk worthiness can be interpreted as trustworthiness. Trust and risk are therefore complementary to each other (i.e. risk = no trust; no risk = trust; risk worthy = trustworthy). Since trust cannot easily be measured, we can evaluate the quantifiable perspectives of risk in a relationship and map them in terms of trust. Step 1 - characteristic trust building
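The threshold logic described above lends itself to a compact illustration. The sketch below is not from the paper; the function, the 0-1 scales and the return labels are hypothetical assumptions introduced only to make the trust/risk mapping concrete.

```python
def evaluate_relationship(perceived_risk: float,
                          trust_level: float,
                          risk_bearing_capacity: float) -> str:
    """Map a trustor's perceived risk onto trust terms (illustrative).

    Follows the rules paraphrased in the text from Mayer et al. (1995),
    Tan and Thoen (2001) and Laeequddin et al. (2009): engage in risk
    taking in relationship (RTR) only while trust exceeds perceived
    risk, and treat anything beyond the trustor's risk-bearing capacity
    as risk management rather than trust.
    """
    if perceived_risk > risk_bearing_capacity:
        return "risky -> no trust (subject becomes risk management)"
    if trust_level > perceived_risk:
        return "not risky -> trust (engage in RTR)"
    # Risk exceeds trust but remains bearable: conditional, short-term
    # engagement, i.e. "risk worthy" read as "trustworthy".
    return "risk worthy -> trustworthy (conditional engagement)"


print(evaluate_relationship(perceived_risk=0.3, trust_level=0.6,
                            risk_bearing_capacity=0.8))
```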
Conclusions: The main contribution of this paper is the idea that trust and risk are interlinked and that trust cannot be built as a one-dimensional phenomenon. In contrast to past approaches, which often present trust as a complicated and multifaceted concept and portray trust building as extraordinarily difficult to establish and perhaps even harder to maintain, our integrated conceptual model suggests that simply evaluating a supply chain member's risks from the characteristics, rationale and institutional control/security perspectives, and bringing them within bearable limits, can lead to trust building. As trust is a context-dependent phenomenon, we cannot study trust under all possible business contexts to design trust building models. Supply chain partners should therefore approach the trust building process from a risk perspective, evaluate relationships as "risky," "risk worthy" or "not risky," and translate them into trust terms as "no trust," "trustworthy" and "trust". Since trust is the trustor's choice, a supply chain member is likely to engage in an act of trust only when his rational risks, related to the other member's technology, economics and dynamic capabilities, are within his bearable limits. Therefore no amount of characteristic trust building, such as commitment, credibility, integrity or emotion, is going to make the trustor vulnerable to the technology or economics risk of the relationship. Even when rational risk levels are lower than what risk-worthy personal characteristics or control mechanisms would support, members will not take risks larger than their bearable capacity and indulge in an act of trust. It is also important to note that the presence of strong institutional systems does not by itself build trust; rather, an institutional system reduces risk, and risk reduction builds trust. If there are no risks, institutions have no role to play. Hence supply chain managers should evaluate the various risk perspectives to build trust, rather than attempting to build trust without considering the risk dimensions or reference points of trust. Trust is often assumed to be a long-term reinforcement process, but our model shows that trust building can be an instant process if the risk levels can be evaluated.
Direction for further research: Trust researchers seem to have focused overwhelmingly on the trustee's characteristics, such as integrity, benevolence, credibility, honesty and transparency, to build trust, and have stereotyped the research by identifying antecedents and consequences of trust in various contexts while presuming trust to be a one-dimensional phenomenon. The problem is that, as trust is a context-dependent phenomenon, how many contexts can be studied to design trust building models? In a dynamic business environment, how can contexts be fixed? The trust building literature fails to address what the trust threshold is and what the reference points of trust building in a business relationship are. Where do the starting, optimum and maximum trust points lie in the trust building process? Further, a fundamental question of debate in business management is whether managers should strive to build trust or strive to reduce risk. Despite the abundant literature on trust, trust is still viewed as a complicated and multifaceted concept. To break this notion, to gain conceptual clarity, and to understand which attitudes and behaviors drive the propensity to trust, the act of risk taking and so on, more studies based on case study, grounded theory or ethnography methods are needed.
|
Why does bank screening matter? Private information and publicly traded securities
|
[
"Securitization",
"Policy analysis",
"Bank screening",
"Information production"
] |
Summarize the following paper into structured abstract.
1. Introduction: This paper develops a general equilibrium model to trace how privately produced bank screening information affects the prices of publicly traded securitizations[1]. The model demonstrates that while ex ante screening can reduce the risks of a securitized portfolio, investors require details of both publicly available and privately produced information to price the portfolio correctly. This requirement is significant because the model shows further that banks can profit by foregoing screening if their market share is sufficiently large and if investors have difficulty detecting changes in bank screening policy.
2. Review of literature: It has long been held that banks and market financiers produce different kinds of loan information. DeLong (1991) and Ramirez (1995) both suggest that banks traditionally screened borrowers more rigorously than did market agents. Moreover, in recognition of the moral hazard and adverse selection problems stemming from these informational asymmetries, loan contracts have been proposed to mitigate their effects. For example, Gorton and Kahn (2000) show how banks' loan governance can affect borrowers' project choices and, consequently, loan default distributions. Holmstrom and Tirole (1997) show how either borrower-provided capital or collateral can reduce the default risk attributable to moral hazard.
3. Model description[9]: This section develops a one-period model to find equilibrium prices for unscreened securities and bank-issued securitized instruments. In essence, banks reduce the risk of individual loan defaults through screening individual loan applications[10]. Assume there are K identical borrowing clients, all of unit size, wishing to sell their primary securities either to banks or to market agents, both of whom are assumed to be expected profit maximizers. We suppose that financial system demands for these primary securities are competitive in that when the original client sells her unit loan contract, either to the market agent or to the bank, she receives the same price SC. Each client thus presents a financier with a proposition to raise funds equal to SC in exchange for a security promising a payoff distribution C.
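As a purely illustrative companion to this setup, the toy Monte Carlo sketch below compares a screened and an unscreened portfolio of K unit loans. Every number in it (default probabilities, the promised payoff, the screening cost) is an invented assumption, not a parameter from the paper; the sketch only reproduces the qualitative trade-off the model derives, namely that screening lowers portfolio risk while also lowering expected return once its cost is counted.

```python
import numpy as np

rng = np.random.default_rng(0)
K = 1_000                  # identical unit-size borrowing clients (assumed)
PAYOFF = 1.2               # promised gross payoff per performing loan (assumed)
P_DEFAULT_RAW = 0.10       # default probability without screening (assumed)
P_DEFAULT_SCREENED = 0.04  # default probability after screening (assumed)
SCREENING_COST = 0.08      # per-loan screening cost (assumed)

def portfolio_value(p_default: float, cost: float) -> np.ndarray:
    """Per-loan net payoff of the K-loan portfolio over 10,000 draws."""
    defaults = rng.random((10_000, K)) < p_default
    return (~defaults).sum(axis=1) * PAYOFF / K - cost

unscreened = portfolio_value(P_DEFAULT_RAW, 0.0)
screened = portfolio_value(P_DEFAULT_SCREENED, SCREENING_COST)

for name, v in (("unscreened", unscreened), ("screened", screened)):
    # Screening shrinks the dispersion (risk), but its cost trims the mean.
    print(f"{name}: mean={v.mean():.4f}, std={v.std():.4f}")
```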
4. Model implications: Section 3 shows that screening can reduce the risks of bank-sold securitizations. But this section shows that, depending on market share, a bank can profit by foregoing that screening whenever investors cannot detect the changes. The section shows further that "skin-in-the-game" and similar policies cannot fully offset these incentives. On the other hand, sale of the securities with recourse can, if appropriately structured, do so effectively. Finally, the section comments on reasons for tranching securitized issues as well as for the growth of securitization.
5. Conclusion: This paper shows how bank screening affects the equilibrium price of securitized instruments: bank screening can reduce the risk of a portfolio at a cost of reducing its expected return. At the same time the relative prices of screened and unscreened portfolios also depend on such other factors as the proportions of each type as well as on the market conditions for funding the original loans.
|
Educational alliance for a sustainable Toronto: The University of Toronto and the City's United Nations University (UNU) Regional Centre of Expertise
|
[
"Higher education",
"Sustainable development",
"Canada",
"Centres of excellence"
] |
Summarize the following paper into structured abstract.
Introduction: Urban theorist Mike Davis points out that the present global urban population of 3.2 billion is "larger than the total population of the world in 1960," as each week cities absorb a million babies and migrants (Davis, 2004). Accompanying such growth is a complexity of social, economic and environmental pressures that demand new educational strategies, within both our formal and informal delivery systems. The Toronto Regional Centre of Expertise (RCE) is a recent initiative that aims to strengthen and enhance education for sustainable development (ESD) within the urban area. It consists of a group of universities, school boards, NGOs, government and other community representatives, each of which recognizes the need for collaboration in the face of rising environmental and urban challenges. This paper begins with a brief description of the City of Toronto, pointing to some of the challenges it currently faces and to a key initiative of the City in the area of climate change that the RCE has also decided to take as its current focus. Following an overview of the history of the establishment of the Toronto RCE, the paper describes both the role that the University of Toronto has had in the development of such a Centre and how the RCE is helping to advance the University's own Five-Year Plan. Finally, the paper closes with some general conclusions and recommendations, arguing that these sorts of inter-institutional collaborations are essential to advancing interdisciplinary environmental education, although there are certainly challenges that must be overcome if success is to be assured.
The City of Toronto: a brief overview of environmental priorities: Like many other North American cities, Toronto faces the problem of how best to encourage economic expansion within environmental constraints. As the largest city in Canada, it supports a population of 2.7 million people within the "Golden Horseshoe" region, whose combined municipalities house approximately seven million people and which expects to grow by more than three million over the coming thirty years (City of Toronto, 2006). The pressures of accommodating such growth are clear and, in light of these pressures, urban intensification is becoming an increasing priority. In addition to these demographic demands, the City also faces special environmental pressures on various fronts, from degrading air quality to waste management. On the energy front, electrical transmission lines into Toronto have reached capacity, so there is a need both to identify alternative energy sources and to curb demand through new conservation efforts. Infrastructure renewal is a priority, with growing pressures on road maintenance, public transportation facilities, wastewater treatment and other community needs. At the same time, Toronto is internationally known as a multicultural city that is inclusive and diverse in nature. Education about sustainability thus becomes a more complex challenge in terms of ensuring culturally-sensitive communication strategies. Recognising these and related environmental pressures, the City has been proactive in addressing the challenges of urban sustainability. Toronto has been acknowledged as being "among the top five low carbon leaders internationally," winning more than 50 awards for quality and innovation in delivering public services and, in 2005, the Suzuki Foundation in Canada identified the City as the North American leader on combating climate change (City of Toronto, 2006; City of Toronto Environment Office, 2007). Certainly, the City has taken some important steps to address the problems of climate change, which the Mayor defines as "the issue of our time-of all time" (City of Toronto, 2007). From creating more than $80 million in energy retrofits to installing a Deep Lake Water Cooling system, Toronto has made the reduction of greenhouse gas emissions a major priority. A significant step forward occurred in July of 2007, when City Council unanimously backed the Mayor's "Toronto Climate Change, Clean Air and Sustainable Energy Action Plan," aimed at substantially exceeding Kyoto greenhouse gas reduction targets. The Plan delivers several programs to encourage Torontonians to adopt more sustainable lifestyles, such as "Live Green Toronto," which offers a "one-window" source of information on programs related to energy and environmental issues (City of Toronto, 2008). From encouraging green business strategies to reviewing the city's transit plan, and from doubling the tree canopy to promoting local food production, the plan is a comprehensive effort to move the City's environmental agenda forward in an even more resolute fashion. An important element is dedicated to "inspiring action" by promoting public awareness to "help Torontonians understand the need to reduce their energy use and what actions they can take at home, work and on the road."
(City of Toronto Environment Office, 2007). The United Nations University Toronto RCE has elected to dedicate its efforts specifically to this objective of advancing public awareness, understanding and action, in light of the challenges that lie ahead in implementing the City's Climate Change, Clean Air and Sustainable Energy Action Plan.
The evolution of the Toronto RCE: The initiative to establish the Toronto RCE dates back to May of 2005, when the Mayor contacted the United Nations University, requesting that the City be recognised as the first Regional Centre of Expertise in the Americas (City of Toronto Environment Office, 2007). The UNU officially supported the request and an initial meeting was held in April of 2006 to begin to gauge interest amongst potential collaborators. Originally, the Toronto Zoo took the lead, together with other City officials, in convening this meeting of representatives from formal, non-formal and informal educational sectors. In addition to municipal, provincial and federal representatives, others at the table included major universities and colleges, school boards, environmental non-governmental organizations (ENGOs), museums and the Toronto and Region Conservation Authority. By June of 2006, a Steering Committee was formed to administer the governance and operations of the Toronto RCE and, within the following year, a Memorandum of Agreement was signed amongst representatives, enabling a more formalised collaborative structure. The Toronto RCE provides a forum for the sharing of information and the forging of partnerships in the delivery of programmes in sustainable development amongst educational institutions, governments and community representatives. The goals are to: enhance public awareness and understanding of sustainability; improve the quality of public education and curriculum development; encourage more sustainable behaviour and consumption patterns amongst Torontonians; and support the development of specialised professional learning programs in the area of sustainable development (City of Toronto, 2006). Long-term objectives are to identify the sources of information surrounding sustainability issues, programmes and policies in Toronto, and to seek to infuse them into the ongoing initiatives of educators, trainers and media representatives. In general, the aim is to "integrate local sustainability challenges into the information/education flow of formal, informal and non-formal education in the Toronto region" (City of Toronto, 2006). While the work of the RCE is currently focused upon the City of Toronto specifically, there is scope for considering expanding these initiatives to include broader membership across the Golden Horseshoe region, once concrete outcomes emerge at the local level. At the same time, though focused on local scales, the RCE works cooperatively with provincial and national organizations that are similarly dedicated to advancing ESD. For instance, linkages exist with the Ontario ESD Working Group, now named the Educational Alliance for a Sustainable Ontario (EASO) - one of a number of provincial working groups seeking to effect change in educational systems - as well as with LSF: Learning for a Sustainable Future, a non-profit Canadian group created to integrate sustainability into curriculums across the country.
Environment Canada is helping to ensure that communication is strengthened with other, newer RCEs now in formation in other cities as well. In July 2007, Toronto City Council committed to housing the RCE Secretariat for a two-year period and to providing office space and staff support in the interim. Dr Ingrid Stefanovic, Director of the Centre for Environment at the University of Toronto, was elected to serve as inaugural Chair of the Steering Committee, and it is in this capacity that the current paper has been developed. Finally, the Steering Committee recognized the lengthy and, therefore, somewhat awkward title of this collaboration as the "United Nations University Toronto Regional Centre of Expertise," and decided to adopt a new acronym for the venture. Building upon the legibility of "EASO" - the Educational Alliance for a Sustainable Ontario - the committee approved the title of "EAST" - Educational Alliance for a Sustainable Toronto. In the forthcoming pages, it is this acronym - RCE/EAST - that will be used.
The University of Toronto and the educational alliance for a sustainable Toronto: As the City of Toronto has committed to providing space and staff time to developing this initiative, other members of the Steering Committee have been actively involved in helping to plan how to move forward. York University, Seneca College and the University of Toronto have each stepped in to represent post-secondary commitments to RCE/EAST, while representatives of the Toronto District School Board speak on behalf of elementary and secondary school children's interests. Broader community concerns are brought forward by groups such as Citizen's Environment Watch, the Toronto and Region Conservation Authority and the Toronto Zoo, while provincial and federal interests are represented by Ontario's Educational Alliance for a Sustainable Ontario and Environment Canada, respectively. At the same time, the University of Toronto's Centre for Environment has taken a leading role in hosting a number of Steering Committee meetings; engaging students in various organizational and research activities; and supporting the work of RCE/EAST as a whole. The Centre's Manager of Program Development and External Relations, Donna Workman, has been active from the start as a member of the RCE/EAST planning committee, and she and her staff have worked with the Director of the Centre and the City of Toronto's Kim Peters to help facilitate many of the initiatives described below. A number of specific activities have been organized and/or are being planned. ESD networking forum
RCE/EAST and the University of Toronto: How fruitful a collaboration?: Founded in 1827, the University of Toronto (UofT) is Canada's largest university, today straddling three campuses, from the western region of the GTA in the suburb of Mississauga to the eastern reaches of Scarborough. Globally recognised as a leading public teaching and research university, the university sustains annual enrolments of over 70,000 students, supported by over 11,000 faculty and staff. Committed to excellence, its library system is rated as one of the top four research libraries in North America, with over 15 million holdings. In fact, the University is known for its particularly strong research portfolio: it offers 75 PhD programs and, over the past two decades, faculty at UofT - though representing under 7 per cent of Canada's university professors - have received almost 25 per cent of all its national awards (University of Toronto, 2008). Stepping UP: a framework for academic planning at the University of Toronto, 2004-2010 is the University's current academic planning document, generated through extensive consultations across the three campuses amongst administrators, staff, faculty and students. In many ways, this document signals some significant shifts at a university already widely recognized as one of Canada's best. Building beyond the university's widely recognized record of research excellence, Stepping UP articulates five priority objectives that, in the minds of many, represented some of the most innovative and important planning initiatives in the recent history of the institution (University of Toronto, 2004). Those objectives placed a priority upon: 1. enhancing the student experience; 2. promoting interdisciplinary activity (including developing a number of focused initiatives on the environment); 3. linking academic programs to research experiences; 4. ensuring that scholarship and academic programs will be relevant to, and have an impact on, the broader local, national and international communities through outreach and engagement in the processes of public policy; and 5. achieving equity and diversity in all activities. While certainly not explicitly mandated to do so, the RCE/EAST has helped to promote these objectives, and it has the potential to continue to do so. Opportunities have been presented to work-study students and research assistants to link academic training with both community-based research and significant outreach and engagement with public policy development by working with City of Toronto staff and their partners. Through tasks such as the analysis of the ESD survey results, those students have come to recognize the significance of linking issues of equity and diversity with environmental concerns. In identifying research priorities at the City of Toronto, students have seen the importance of connecting university research activities with city planning initiatives. Perhaps most importantly, the RCE/EAST initiative promotes the university's objective of engaging in outreach and helping to influence the development of public policy through interdisciplinary engagement in environmental concerns. The University of Toronto is unique in the breadth and strength of its disciplinary programs, which is the basis for sound interdisciplinary activity. At the same time, the importance of interdisciplinary scholarship and research at the University is firmly embedded in Stepping UP.
Certainly, the struggle to define rigor in interdisciplinary activities, and to ensure that both breadth and depth are preserved within those initiatives, has always been challenging. From the early part of the 20th century, when Liberal Arts Colleges in the United States began to promote a notion of "general education" for the "whole person," the academic community has acknowledged the importance of moving beyond reductionist pedagogy and building bridges between the disciplines (Stefanovic, 2005). Nevertheless, the conversation about what constitutes appropriate "interdisciplinary" research and teaching is hardly a simple matter, as attested to by decades of publications by educators and theorists (Klein, 1990). From the critique of specialization in the famous 1945 Harvard "redbook," General Education in a Free Society, through seminal documents such as the 1972 OECD Interdisciplinarity: Problems of Teaching and Research in Universities - a "working tool" that continues to be one of the most widely cited references on interdisciplinarity to this day - the conversation about what constitutes appropriate interdisciplinary pedagogy continues (Klein, 1990; Salter and Hearn, 1997). That there are nuances to interpretations of "interdisciplinarity" becomes evident when one simply surveys the various terms currently in use, each signifying something slightly different. From the "pluridisciplinary" relations between disciplines of a similar nature (mathematics and physics, for example) to the "cross-disciplinary" extension of one discipline into the area of another (ethics into business, for example), the conversation is rooted in a need to identify the real nature, extent and measures of integrity of interdisciplinary education (Jantsch, 1972). Certainly, interdisciplinary education and research present unique challenges, both in terms of the kinds of questions posed and the approaches undertaken to move towards workable solutions (Stefanovic, 1996). In the words of Linda Carson, the curriculum developer for the new Bachelor of Knowledge Integration at the University of Waterloo, "What do we do when the problem is vast? When the dividing line between one problem and another is blurred? When the scope is global or subatomic? When there are many contributing factors, some unknown, some human and culturally dependent?... How do we systematically address problems that may be unsystematic?" (Carson, 2007). There are many different answers to this question. Sometimes, a simply additive, multidisciplinary juxtaposition of unrelated disciplines allows for multiple perspectives to be heard. At other moments, the answer is to engage in a transdisciplinary "transcendence" of individual disciplines, so that "each team member embraces and extends the ideas of others until the boundaries between 'mine,' 'his' and 'hers' dissolve" (Carson, 2007). In each case, if one follows convention and uses the term "interdisciplinary" as the umbrella concept for these different approaches, it is evident that something new needs to happen in some kind of synthesis and "bridge-building" between the disciplines, and in terms of the very process of understanding complex problems (Klein, 1990; Lattuca, 2001). When opportunities present themselves to universities to link their research activities with hands-on problems facing local communities, the demands for interdisciplinary approaches are real.
Problem-based research requires multiple perspectives, as no one discipline will suffice to capture the social, cultural, regulatory, technological, scientific, economic and ecological dimensions of lived experience. In that vein, the RCE/EAST presents the University of Toronto with a special opportunity to link academic researchers and students with public policy makers, decision makers and broader environmental communities. Certainly, there are always special challenges to such bridge-building. Language is one: academics must learn how to translate their specialized, disciplinary terms, with minimal loss of integrity, into concepts that are more accessible to the broader community. Different expectations of methodological rigor may impede collaboration, as may differing epistemological interpretations of what constitutes "real" knowledge or ethical interpretations of what constitutes the "right" thing to do. Each of these challenges exists in the current collaboration between the University of Toronto and the RCE/EAST. However, exposing students to these kinds of challenges only broadens their horizons and their educational opportunities. Engaging faculty members in helping to solve current challenges of public policy is, frankly, morally incumbent upon the university, if it is to ensure that the academic community genuinely contributes to positive, societal change. Whether through multidisciplinary, interdisciplinary, transdisciplinary or pluridisciplinary initiatives, it is important to show students that environmental problems, from climate change to waste management, require broad consultations, and that academic research in this area is only made more meaningful if links are made to genuine, problem-based approaches of study. In the words of Justin Trudeau, son of our former Prime Minister and currently an MA Candidate in Environmental Geography at McGill University, "right now, we desperately need to get young people connected with their communities" (Elliott, 2007). Jamie Biggar, an MA student at the University of Victoria, adds that "Environmental studies is inherently interdisciplinary. It's also humbling because it makes you appreciate the limits of your knowledge, and it's empowering, because once you get over that, it gives you the tools to approach complex issues" (Elliott, 2007). One of those tools consists of concrete opportunities to link research with public policy development, as has been provided through this unique collaboration between the University of Toronto and the multiple members of RCE/EAST.
Some challenges that remain: Despite the positive opportunities that present themselves in such interdisciplinary exchanges, the RCE/EAST also faces some serious obstacles that deserve mention here. In fact, it is no understatement to note that the way in which these obstacles are resolved in the coming months may spell the success or failure of the Toronto RCE initiative itself. First of all, it is not unreasonable to expect that, on some level, there may be different or even conflicting objectives among the various stakeholders participating in RCE/EAST. For instance, like most universities, the University of Toronto is engaged in post-secondary education and research. The City of Toronto and its community partners, however, may be less interested in studying problems than in instituting on-the-ground, positive policy, program and design solutions. To be sure, the University's Stepping UP plan specifically aims to ensure "that scholarship and academic programs will be relevant to, and have an impact on, the broader local, national and international communities." Nevertheless, the challenge is to ensure that such "relevance" of community-based research also maintains standard measures of academic integrity. On the other hand, many RCE/EAST partners recognise the benefit of academic research in guiding public policy development and specific educational initiatives. But in all of these perspectives, new challenges will emerge in integrating academic study in a meaningful way with targeted and concrete changes that move the city toward sustainability. As the internationally renowned architect and planner C.A. Doxiadis remarked to a group of university friends many years ago, academics are, in some way, privileged, because they are offered the luxury of academic reflection on the meaning of a bridge, whereas the architect is on an actual deadline to build it! In many ways, those fundamental paradigms that separate the university from the community persist, and can potentially complicate project priorities. In fact, those separate expectations can complicate funding opportunities as well. Typically, university faculty will apply for research grants that are only awarded to projects that are innovative and advance understanding of a specific research question. In fact, few of those grants reward interdisciplinary studies; they are geared instead to discipline-based research. It is no small challenge to identify funding sources for RCE/EAST projects that support traditional sorts of academic research while also allowing for practical, city-focused benefits. A few additional problems merit mention here. The RCE/EAST Steering Committee is joined together by a formal MOU and a sincere commitment amongst its members to advance ESD in Toronto. Nevertheless, the RCE is at a crossroads: unless concrete projects are identified and funded in the near future that will engage each of the Steering Committee's member organisations, the future of the collaboration is at risk. In fact, the time commitment required to forge integrative, interdisciplinary ventures is so large, and members are so busy, that the coming months will be critical in terms of solidifying the future of RCE/EAST.
Conclusion: The UN Decade of Education for Sustainable Development presents challenges and opportunities for cities and universities to work together in new ways to advance understanding and action in the area of sustainability. There are difficulties, ranging from different worldviews amongst partner organizations to the pragmatic challenges of securing funding for interdisciplinary projects that are both academic and practical in nature. Decades ago, the renowned anthropologist Dr Margaret Mead reportedly remarked that "we are now at a point where we must educate our children in what no one knew yesterday, and prepare our schools for what no one knows yet" (Carson, 2007). To this day, such a journey still demands new and innovative collaborative approaches to advance environmental education. Efforts such as RCE/EAST provide novel opportunities to bring together universities, colleges, primary and secondary schools, community partners and governments to advance awareness and institute changes that promote sustainable patterns of living. That the collaboration is time-consuming and carries some risk of failure is a fact. However, there are clear benefits for university students who become engaged in community-based research and work closely with community partners who would otherwise not be available to them within a standard academic program. Presumably, those same students will also help to advance the collective mission of the RCE. In the end, to quote Zoe Caron, Atlantic Coordinator for Sustainable Campuses, "students are the citizens of the university and need to flex their citizenship... In Canada especially, it's so easy to make change if you just really go for it" (Elliott, 2007).
|
Multi-plant improvement programmes: a literature review and research agenda
|
[
"Knowledge transfer",
"Literature review",
"Process improvement",
"Global operations management",
"Improvement program",
"Production system"
] |
Summarize the following paper into structured abstract.
1 Introduction: Many multinational corporations (MNCs) have strategically used the steeply increasing globalisation of the past two decades to grow internationally through acquisitions, mergers and green field establishments in foreign markets. As economic conditions tighten and competition gets tougher, many MNCs find themselves struggling with a dispersed, heterogeneous and low-performing network of plants. Experiencing a legitimate need for continuous process improvement in all plants in the network, corporations seek to improve operational capabilities and, hence, increase the competitiveness of the MNC as a whole. With the knowledge that the ability to learn within international networks offers a potent source of competitive advantage (Shi and Gregory, 1998), the latest trend for process improvement sees MNCs going from plant-specific improvement projects to multi-plant improvement programmes (Netland, 2012).
2 Theoretical background and definition of scope: We are investigating the union of multi-plant coordination literature and process improvement literature. In order to define our scope, these two topics are now introduced.
3 Conceptual framework: The multi-disciplinary nature of the topic becomes apparent when reviewing influential theoretical studies in the broader field of knowledge and practice transfer in MNCs. This literature unveils two explanatory axes for how widely and deeply multi-plant improvement programmes play out in subsidiaries - one stems primarily from international business and the other primarily from organisation science:
4 Research method: A research synthesis summarises and cumulates the findings of different studies on a topic (Tranfield et al., 2003). To synthesise the state-of-the-art on multi-plant improvement programmes, we undertook a systematic literature review. Starting where Prasad and Babbar (2000) ended their 1986-1997 review on international operations management, this review spans the 14 years from 1998 to 2011.
5 Presentation of findings: The review found 30 papers that explicitly address multi-plant improvement programmes in MNCs. The papers are summarised in the Appendix, which provides a short description of all the included papers in terms of publication channel and year, type of improvement programmes studied, methodological approach and research focus, and main finding.
6 Discussion and research agenda: This study has reviewed the recent literature on multi-plant improvement programmes. It seems clear from the covered literature that a new field is in the making and will establish itself with a continuous flow of high-quality studies, in high-level journals, using a variety of methodological approaches and theoretical perspectives. Future research should address the several gaps and shortcomings in the literature.
7 Conclusions: The past decade has seen an ongoing trend among multinational manufacturing companies to implement multi-plant improvement programmes. Despite the evident popularity of such programmes among practitioners, the corresponding literature remains scarce and no coherent stream of literature has emerged to date on this widespread phenomenon. Instead, research from several areas offers theoretical explanations and normative roadmaps for aspects of such efforts. This paper has brought together this research on multi-plant improvement programmes from 15 leading management journals to describe the current research frontier and suggest a research agenda for the future. We found a scattered interest across journals, where IJOPM still stands out as the primary professional journal for research on multi-plant improvement programmes.
|
Global careers in the Arabian Gulf: Understanding motives for self-initiated expatriation of the highly skilled, globally mobile professionals
|
[
"Expatriates",
"Labour market",
"Migrant workers"
] |
Summarize the following paper into structured abstract.
__NO_TITLE__: Increasingly, people's careers are becoming global (Tams and Arthur, 2010). A growing number of executives acquire global assignment experience, at least in Western contexts (Andresen et al., 2012; Briscoe et al., 2012; Reiche and Harzing, 2011). Contemporary global labor markets offer greater variety and choice for those considering global career moves (Baruch et al., 2013; Tharenou, 2015). While a significant amount of expatriation is managed by MNCs (Collings et al., 2007), self-initiated expatriation (SIE) remains a growing phenomenon (Al Ariss and Crowley-Henry, 2013; Doherty, 2013). However, it is not always clear what distinguishes SIE from other types of global moves, like migration (Andresen et al., 2014). Being proactive, though, is critical for creating positive career outcomes (Seibert et al., 2001). People may opt for SIE to improve both their generic and specific human capital (Gibbons and Waldman, 2004), either for themselves alone, or as a springboard to corporate careers, where experience in large MNCs is indispensable (Hamori and Koyuncu, 2011).
Theoretical underpinning of SIE: Prior literature informed our conceptualization of SIE in a number of ways. It is important to recognize that while SIE is a personal decision, it cannot be dealt with in isolation from the context. At the core of the decision of expatriates to stay (vs to return) in the host country lie the competing forces of push and pull, at the individual, organizational and cultural levels, as suggested by Baruch's (1995) push/pull model of international labor mobility (following Lewin's (1951) field theory). It is important to examine the relevance of societal factors and, in particular, to compare attitudes of people from a variety of origins in destinations other than Western countries.
Context: The UAE has one of the highest shares of foreign nationals within its borders, with expatriate residents forming 89 percent of the labor market (Forstenlechner and Rutledge, 2011). Labor market segmentation is largely based on ethnicity (Abdalla et al., 2010); although common, this perception is informal and not openly discussed. The hierarchy of prestige ranks UAE citizens first, then Western (European and North American) expatriates, followed by Arab and other expatriates of professional and managerial talent. At the bottom of the labor market are the laborers (mostly from India, Pakistan and the Philippines).
Method: To answer such questions we opted for a mixed research methodology, with an emphasis on qualitative research design (Pratt, 2009), and added quantitative analysis, which is mostly descriptive, following Edmondson and McManus (2007). We used theoretical constructs to structure empirical observations. The method applied fits well with studying global business environments (see Birkinshaw et al., 2011). As recently suggested, scholars should "engage with those living the phenomenon and attempt to understand it from their perspective" (Corley, 2015, p. 2). This is of particular relevance when investigating phenomena undergoing constant change (Gioia et al., 2013).
Results: General findings
Interpretation: The clusters of views and attitudes are mostly based on the original national or cultural background of the expatriate. Similar differentiation across cultural clusters was recently noted in similar contexts (Sikdar and Vel, 2011; Tong et al., 2012).
Discussion and conclusions: We conducted this study to identify factors influencing the motives of expatriates to come to peripheral countries (using the case of the Gulf state of the UAE). Using cases within context helps to facilitate knowledge creation (Bamberger and Pratt, 2010) and contribute to both theory progress and practical development.
Theoretical contributions: Our findings add to contemporary career theory and global HRM literature, a growing area of relevance in what is currently a fairly fragmented literature (Baruch et al., 2015; Lee et al., 2014). People of various origins and backgrounds are ready and willing to take their future in their own hands by planning and executing non-traditional career moves (Arthur, 2008; Hall, 2004). The impetus for SIE is mostly individual, which lends support to the individualistic perspective of career studies, like the protean career (Hall, 2004). We expand the discourse of contemporary career theory and expatriation, which fits well with the theory of careers and labor markets as an eco-system (Baruch et al., 2016). SIE is a growing segment of the labor market that is characterized by individuals who plan and manage their career across national and cultural borders, benefiting from short- and long-term career gains and, as a result, adding value to their employers and to the competitiveness of the countries in which they are employed. The length of time and the experience of the expatriation influence the self-identity of the expatriate (Scurry et al., 2013) and guide their future career options (Baruch et al., 2016).
Implications for individuals considering expatriation: Expatriates tend to gain broader managerial experience, and to be recruited or promoted into positions that would not be possible for them in their home countries. It should be noted, though, that this might cause adjustment issues on repatriation (see Lazarova and Caligiuri, 2002). Receiving a high salary that cannot be offered back home is a type of "Golden handcuffs" or "Golden cage," where people will be reluctant to return to their home country for financial reasons (Richardson and McKenna, 2002). Our findings suggest that opting for expatriation in the Gulf can be a rewarding experience, leading to financial gains and new career opportunities. Subject to a realistic understanding of the local conditions and habitudes, people can enjoy life and positive work environments. Depending on the long-term aim, people from different origins and cultural backgrounds will have different experiences.
Implications for business firms: We distinguish between implications for local firms and for MNCs which operate or plan to enter these markets. For global employers, we highlight areas where positive professional people-management can gain better "return on investment" if the expatriates are dealt with via appropriate practice. UAE enterprises can be effective in targeting, recruiting, and remunerating expatriates. For example, in recruiting future expatriates, "word of mouth" was typical. Thus, the practice of "referrals" may be applied effectively. This might be contentious in an environment where locally the practice of Wasta exists. However, the more formal use of networks might be incredibly valuable and something that could possibly be encouraged when it comes to the recruiting of the highly skilled.
Implications for policy options at the national level: Authorities need to understand that the highly skilled should be treated differently from the low skilled, in line with benchmarks from developed countries that actively differentiate processes and rules between skill groups. Despite the intense need and ongoing efforts to develop indigenous human capital (Ramady, 2013), we consider it important that policy makers realize what is needed to fulfill this ongoing need. Based on our findings, we recommend the following policy options to increase the likelihood of attracting the highly skilled:
Limitations and recommendations for further research: The study took place in one country in the Arabian Gulf, and was qualitative in nature. Future studies may replicate it in other peripheral locations, and using quantitative research design. Our study is limited to professional workers and managers, but this was done on purpose as such populations are of high interest for both career and global labor market studies.
Conclusions: Our study manifests the contingency needed in analyzing issues of relevance for global audience. National and organizational competitiveness depend highly on aggregate human capital. To attract and retain global talent, countries in peripheral geo-locations should find specific niches of the labor market they can reach and connect with. We pointed out a need not only to focus on reasons for expatriation, but to expand the framework to understand reasons for staying in the host country. In the process we identified different clusters of attitudes and reasoning, mostly following ethnic and national origin of the expatriates. The positive attitude of local individuals and governments can be instrumental in keeping expatriates in the host country.
|
The impact of managerial commitment and Kaizen benefits on companies
|
[
"Kaizen",
"Educational level",
"Managerial commitment",
"Partial least squares (PLS)"
] |
Summarize the following paper into structured abstract.
1. Introduction: Western companies tend to manage their business activities by establishing short-term benefits, but this may prevent them from identifying and meeting needs beyond the immediate; in addition, short-term process planning can limit the corporate vision, production quality levels and profitability.
2. Literature review: 2.1. Kaizen in the industry
3. Methodology: 3.1. Questionnaire design
4. Results: 4.1. Description of the sample
5. Conclusions and industrial implications: This research analyzes four latent variables associated with the implementation of Kaizen in Mexican maquiladoras. On the one hand, two of these variables concern activities carried out at the planning stage in terms of managerial commitment and professional development of human resources. On the other hand, the two remaining variables involve benefits obtained by companies, both economic and for human resources. From the assessment of these variables the following conclusions are thus proposed:
6. Future research: Results obtained from the model in Figure 2 show that the R2 values for the dependent latent variables (professional development of human resources, human resources benefits and economic benefits) are far from unity, the maximum value. As for future research, the following proposals can be considered to increase the explanatory quality of the model:
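For readers unfamiliar with the R2 criterion invoked here, the snippet below shows how R2 for a dependent latent variable is conventionally computed once latent scores are in hand. The data are synthetic placeholders and the simple one-predictor OLS inner model is only a stand-in; the paper's actual PLS estimation is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 200  # hypothetical number of surveyed maquiladoras

# Placeholder scores standing in for PLS-estimated latent variables.
managerial_commitment = rng.normal(size=n)
economic_benefits = 0.5 * managerial_commitment + rng.normal(size=n)

def r_squared(y: np.ndarray, y_hat: np.ndarray) -> float:
    """Share of variance in the dependent latent variable explained."""
    ss_res = np.sum((y - y_hat) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

# Fit the inner model and score it against the observed latent values.
beta = np.polyfit(managerial_commitment, economic_benefits, 1)
predicted = np.polyval(beta, managerial_commitment)
print(f"R2 = {r_squared(economic_benefits, predicted):.3f}")  # well below 1
```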
|
Raw vegetable salad consumers in full-service restaurants
|
[
"Consumer",
"Food consumption",
"Food service",
"Restaurants",
"Socio-demographic variables",
"Food behaviour"
] |
Summarize the following paper into structured abstract.
Introduction: In recent decades an increase has been observed in various countries in the number of meals eaten away from home, as well as in the number of establishments in the foodservice sector. It was estimated in 2009 that, of the total amount spent on food, 47.5 and 31.0 percent was spent on food eaten away from home in the USA and Brazil, respectively (Economic Research Service, 2011; Instituto Brasileiro de Geografia e Estatistica (IBGE), 2010a). The data available for the European Union revealed a similar tendency (Mitchell, 2004).
Methods: Study design and sample
Results: The majority of the population interviewed consisted of women (57.1 percent). The largest proportion of the consumers were between 26 and 40 years of age (40.1 percent), with a monthly family income of up to ten minimum salaries (55.1 percent) and a university or post-graduate education (69.2 percent). However, the majority of the consumers (82.4 percent) had not graduated in an area of health or food (Table I).
Discussion: For the sample of individuals frequenting full-service restaurants, 52.3 percent reported taking meals in this type of restaurant at least once a week. Among adult Americans, those who reported frequenting full-service restaurants represented 90 percent of the consumers interviewed, and the survey registered that 68 percent frequented this type of restaurant at least once a week in 1995-1996 (Duffey et al., 2007). In Brazil, among consumers who reported taking meals in full-service restaurants, 51.4 percent did so weekly in Rio de Janeiro and 54 percent rarely in Campinas (Castelo Branco et al., 2003; Sanches and Salay, 2011).
Conclusions: For the sample of individuals studied in the city of Campinas, those most accustomed to having their meals in full-service restaurants were male, and those with higher family incomes and higher educational levels. Subjects graduated in the areas of health or food showed a significantly higher frequency for the consumption of salads in full-service restaurants (lunch and dinner at weekends). Social desirability apparently had no influence on the responses referring to the studied frequencies.
|
Beyond accessibility: empowering mobility-impaired customers with motivation differentiation
|
[
"Motivation",
"Disabilities",
"Resorts",
"Self-determination theory",
"Effects comparison",
"Seemingly unrelated regression"
] |
Summarize the following paper into structured abstract.
1. Introduction: People with mobility impairments (PwMIs) are a fast-growing yet largely underrated travel market for hospitality/tourism businesses. In the USA alone, about 6.89 million adults are mobility impaired (U.S. Department of Health and Human Services, 2015), categorized as using aids such as wheelchairs or crutches because of their inability to walk, grasp or lift objects (NANDA International, 2012). They took 40 million trips annually in the USA and spent US$17.3bn on travel (Open Doors Organization, 2015). This market potential is multiplied considering that PwMI typically travel with companions, and given the ageing population: 27 per cent of people aged 65 and above in the USA are mobility challenged, whereas over 22 per cent of people globally will be over 65 years old by 2050 (World Health Statistics, 2016).
2. Literature review: 2.1 Self-determination theory-based motivation differentiation
3. Methodology: Data collection was conducted using Qualtrics surveys designed to measure PwMI's psychological-need satisfaction, self-determined versus controlled travel motivations and travel-pursuit dimensions based on a given resort-package scenario, along with control measures. To evaluate the contextual consistency of results, an extreme groups design (Allison et al., 1997; Preacher, 2015) was adopted to create two significantly contrasting contexts in travel challenge levels for result comparisons. As travel challenge levels are shaped by individual travel abilities and environmental accessibilities (McKercher and Darcy, 2018), to maximize the between-context challenge-level differences, the challenging context was created by assigning a less feasible package scenario to the sample PwMI group with weaker travel abilities (i.e. lower physical functionalities and travel frequencies), whereas the unchallenging context paired the more feasible scenario to the group with stronger travel abilities. To control for between-group differences other than challenge levels, control measures (see Instruments) were incorporated as covariates.
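A stripped-down sketch of this covariate-controlled, between-context comparison might look like the following. The simulated data, variable names and single covariate are assumptions for illustration only; the study's actual instruments and its seemingly-unrelated-regression effect comparisons are not reproduced, with plain per-context OLS used here as a simpler stand-in.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)
n = 300  # hypothetical number of PwMI respondents

df = pd.DataFrame({
    # 1 = challenging context (weaker abilities + less feasible package)
    "challenging": rng.integers(0, 2, n),
    "self_determined": rng.normal(size=n),  # motivation scores (placeholder)
    "controlled": rng.normal(size=n),
    "age": rng.normal(45, 12, n),           # control measure (assumed)
})
# Assumed pattern: self-determined motivation dominates under challenge.
df["intention"] = (0.6 * df.self_determined * df.challenging
                   + 0.3 * df.controlled * (1 - df.challenging)
                   + 0.01 * df.age + rng.normal(scale=0.5, size=n))

# One regression per context, with the control measure as covariate.
for label, sub in df.groupby("challenging"):
    fit = smf.ols("intention ~ self_determined + controlled + age",
                  data=sub).fit()
    print(f"challenging={label}")
    print(fit.params.round(3), "\n")
```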
4. Conclusions: Intended as a promising supplement to the primary scholarly/industrial concentration on the facility/service accommodation of PwMI's travel pursuits, this study advocates empowering PwMI psychologically by intentionally cultivating superior motivations in travel-facilitating effectiveness and challenge resistance, which can be identified based on a proposed SDT-based motivation subdivision. The H1 examination supports the dominant effectiveness of self-determined motivations in facilitating PwMI's resort-travel attitudes and behavioral intentions given significant travel challenges. This echoes the non-travel SDT applications positing that self-determined motivations more saliently facilitate activity-pursuit attitudes and behavioral intentions when activities are difficult to achieve (Gonzalez-Cutre et al., 2018). The practical value of the SDT-based subdivision for identifying superior travel-facilitating motivations is thus confirmed (Q1).
5. Theoretical implications: This research extends the predominant purpose-defined travel motivation typologies with an SDT-based subdivision. Future identification of travel motivations thereby should not only be confined to what travel purposes are pursued but also to why pursuing them (i.e. self-determined or controlled), to gain greater power in explaining/predicting travel attitudes/behaviors. Moreover, in establishing the value and feasibility of cultivating superior travel motivations, it supplements the prevailing explorations of structural challenge removals with the possibility of psychologically empowering PwMI's travel pursuits.
6. Practical implications: It is crucial for hospitality/tourism businesses to identify strategies that effectively motivate PwMI to persist in travel attempts, especially when their PwMI-targeted accessibility/service offerings are gradually improving but not yet meeting market expectations. Study findings provide a valuable guide to more effective hospitality/tourism marketing for PwMI, by identifying and satisfying, or more powerfully, cultivating superior travel-facilitating motivations corresponding to both individual and environmental challenge levels. Although it is ideal to satisfy all personal travel purposes, it is only realistic to concentrate the limited marketing resources on the most effective and challenge-resistant motivations for the efficient empowerment of PwMI's travel pursuits. Such practices after validation may be extended to the general traveler market.
7. Limitations and future research: Limitations of this study include its extreme groups design, which restricts generalization to challenge levels more finely graded than the two contrasting contexts examined here. Although potential between-group differences were controlled as covariates, a fully randomized assignment of the two package scenarios within each participant group would more rigorously rule out influences irrelevant to travel challenges.
|
Ethan learns to be a learning organization: Culture change prompts greater openness and empowerment
|
[
"Performance",
"Employee relations",
"Recruitment",
"Organizational Culture",
"Learning organization",
"Remuneration"
] |
Summarize the following paper into structured abstract.
__NO_TITLE__: An Indian design, sales and manufacturing company that has grown significantly over the past 35 years became a learning organization through an unusual set of circumstances.
Self-managed development, empowerment and creativity: In other words, HR is promoting the establishment of a learning organization. A number of senior managers are still struggling to introduce these concepts. In general, though, the idea of a learning organization is proving highly influential. Its concepts have stimulated debate, and there is growing acceptance of self-managed development, empowerment and creativity. There is much more evidence in the company of effective teamwork and the principles of total quality management. And HR specialists are encouraging the processes of reflection and self-improvement that lie at the heart of organizational improvement.
Selection and recruitment: The company recruits many of its employees through an agency. It advertises in the local press, with agencies and, sometimes, at assessment-center-type organizations for more senior positions. Applicants are tested on their verbal and numerical skills, and interviewed. Almost all operators join as temporary employees. The company endeavors to ensure that, in addition to specific job requirements, successful applicants' personalities fit the company ethos. A sense of humor, enthusiasm and flexibility are key requirements.
Annual performance appraisal: Each Ethan employee has an annual appraisal. Managers involve the individual in all stages of the process, and particularly in setting objectives and the steps to be taken to achieve them. Appraisal is usually conducted by the team leader, off-line and on a one-to-one basis. The format is explained and the importance of it being a two-way process emphasized. Employees can ask for more than one appraisal a year.
|
Responsible property investing: what the leaders are doing
|
[
"Corporate social responsibility",
"Sustainable design",
"Ethics"
] |
Summarize the following paper into structured abstract.
1. Introduction: Man really is the only animal that builds his terrarium around him as he goes and real estate is really the business of building that terrarium. So we have a tremendous ethical content, a tremendous social purpose (James A. Graaskamp, pioneer of modern real estate studies[1]). The purpose of this paper is to help those making investment decisions on existing commercial real estate portfolios understand how environmental, social and governance (ESG) issues impact the current value and prospective investment performance of the assets they own and manage. In our view, efforts to understand and respond to these issues constitute the practice of responsible property investing (RPI). We hope that this work will help property asset owners, managers and developers understand and react to financially sound RPI. As such issues grow in importance for governments, businesses, and society at large, they are increasingly influencing the context within which property investments are held and related decisions made. For example, if tenants exercise a preference for occupying more "sustainable" properties, then the income growth from such investments should prove superior to that from less sustainable, less desirable stock. Similarly, if investors exercise the same preference, then less sustainable assets will prove less liquid, more risky and potentially less valuable than more sustainable assets. If new social standards based around improved sustainability lead to existing landlords having to improve the performance of their properties, then less sustainable assets will probably require greater expenditure and deliver poorer returns. Investors who preempt these new standards may be best placed to seize the opportunity they offer. Given this, it is the fiduciary responsibility of property investors to (at least) understand the implications of these issues and to seek economic ways to improve the sustainability of the assets they buy and hold. With this in mind, the UNEP Finance Initiative Property Working Group (PWG) has brought together representatives from some of the foremost property investment organizations around the world, committed to improving the environmental and social performance or governance of their property portfolios. This paper collates examples of how they are meeting their social and fiduciary responsibilities while simultaneously "doing well by doing good". They provide robust evidence of emerging and innovative practice today, which we hope will become common practice tomorrow. In our view, RPI means property investment or management strategies that go beyond compliance with minimum legal requirements in order to address environmental, social and governance issues. Because so many factors relate to these issues, RPI touches upon literally dozens of property location, design, management, and investment strategies. To simplify things, we have grouped these strategies into ten underlying dimensions. Including these in building management decisions will improve an investor's performance on ESG issues: 1. energy conservation - green power generation and purchasing, energy efficient design, or conservation retrofitting; 2. environmental protection - water conservation, solid waste recycling, habitat protection; 3. voluntary certifications - green building certification, certified sustainable wood finishes; 4. public transport-oriented developments - transit-oriented development, walkable communities, mixed-use development; 5.
urban revitalization and adaptability - infill development, flexible interiors, brownfield redevelopment; 6. health and safety - site security, avoidance of natural hazards, first aid readiness; 7. worker wellbeing - plazas, childcare on premises, indoor environmental quality, barrier-free design; 8. corporate citizenship - regulatory compliance, sustainability disclosure and reporting, independent boards, adoption of voluntary codes of ethical conduct, stakeholder engagement; 9. social equity and community development - fair labor practices, affordable/social housing, community hiring and training; and 10. local citizenship - quality design, minimum neighborhood impacts, considerate construction, community outreach, historic preservation, no undue influence on local governments. We have identified two types of financially sound RPI strategies: no-cost and value-added approaches. With the no-cost approach, managers find ways to improve the social or environmental performance of their properties at zero added expense. Turning out the lights in unoccupied areas, for example, is a no-cost strategy that fights global warming and reduces energy bills. Value-added strategies, on the other hand, require some initial financial outlays, but pay for themselves by either increasing net incomes (via higher rents or lower running costs) or reducing risk premiums (via lower environmental risks, less depreciation or less marketability risk). For example, designing in a childcare facility may cost more in architectural services and materials but the added costs may be offset by higher rents. Many of these measures have been shown to increase returns. In some instances, more research is needed to quantify their financial benefits.
2. What investors are doing: The following are examples of RPI strategies being employed by investors or asset managers today. 2.1 Energy conservation
3. Final thoughts on responsible property investing: It is a truism that properties accommodate most human activity. However, the corollary of this is that properties are also the places where a significant proportion of CO2 emissions are generated. The Association for the Conservation of Energy in the UK estimates that, through their construction, use and demolition, built structures are the source for nearly 50 percent of such emissions. On this basis, any coherent strategy towards constraining and reducing CO2 emissions must place thought and action on the environmental impacts of properties at its core. Substantial and important work is already underway to identify practical ways and policy measures to ensure that newly constructed buildings are built and operated in environmentally sustainable ways. UNEP's own Sustainable Building and Construction Initiative is important in this regard. However, depending on economic and property market cycles, newly developed buildings typically replace up to 2-3 percent of the existing stock per annum. This means that any environmental program that focuses solely on new construction would leave untouched the current universe of built structures where most environmental and energy inefficiencies reside and, as such, make only slow progress in the crucial theatre of the built environment. Hence, there is a need for concerted thought and action to be given to finding ways to reduce the environmental impacts of the existing built stock. This is the specific subject area that the UNEP Finance Initiative's Property Working Group is committed to exploring. The complexities surrounding how properties are owned, leased and occupied are such that this requires specialist attention in dealing with the practical management and refurbishment of properties.
|
Real estate private equity: the case of US unlisted REITs
|
[
"Real estate",
"Equity capital",
"Property finance"
] |
Summarize the following paper into structured abstract.
1. Introduction: Private equity funding of institutional-grade commercial real estate in the US has historically come from wealthy individuals and pension funds. These sources remain dominant today as evidenced by the recent wave of real estate public-to-private transactions. Small-unit investors were provided the opportunity to purchase shares in publicly-traded portfolios of commercial real estate in 1960 in the US when the first REIT law was enacted and now in many countries throughout the world as the REIT concept proliferates. During the early 1990s, real estate fund sponsors in the US began raising equity for commercial real estate investment using broker-dealer channels. Replicating distribution networks established by mutual fund groups, sponsors offer real estate investment programs to the population of mostly small-unit investors who rely on the services of financial planners. Sponsors using broker-dealer networks to raise equity capital begin as private companies. While sometimes referred to as "private REITs", a select group of these companies and the mini-industry that now surrounds them prefer the title "unlisted REIT" because sponsors follow the SEC's rules for publicly-traded REITs, but do not list shares on exchanges[1]. The success of the unlisted REIT approach is evidenced by the relatively high volume of equity flows to these companies, sometimes resulting in near-parity with inflows to publicly-traded REITs. During 2003, for example, the unlisted REIT group raised $7 billion through broker-dealer channels while all publicly-traded REITs raised $8.1 billion through public stock offerings[2]. Similar broker-dealer networks are now used in the US to facilitate tax-free exchanges through tenant-in-common (TIC) funds[3]. 1.1 Unintended consequences for unlisted REIT investors
2. Institutional detail: Because no published papers have appeared on unlisted REITs, we provide selected institutional details about how these businesses operate[5]. As shown in Figure 1, the ownership entity is embedded in a fairly complex network of relationships with service providers and regulators. The chain that connects REITs to investors is our main interest. This chain begins with the establishment of an in-house broker-dealer affiliate that develops contractual relationships with national and regional managing dealers (e.g. A.G. Edwards, UBS Warburg, and Paine Webber). Local financial planners acting as soliciting dealers with or without affiliations to these national and regional broker-dealers then sell investment programs of unlisted REITs to the public. Managing brokers may acquire due diligence reports from third-party companies that inform about the capability, competency, and integrity of unlisted REIT sponsors prior to agreeing to sell their investment programs. Individuals invest as little as $1,000 after being given the prospectus of an unlisted REIT. The investment of $1,000 is reduced by the fees owed to the network including the in-house broker-dealer company. For example, with a 10 percent fee, $900 is actually invested in real estate. Investment programs qualify as securities and therefore must conform to the requirements of the SEC, NASAA (in most states), and NASD. Also, the actions of broker-dealers in their relations with the public must conform to NASAA requirements and are regulated by NASD. External advisors - usually subsidiaries of unlisted REITs - carry out day-to-day operations including relationship management with regulators, attorneys, and accountants[6]. Unlisted REIT prospectuses contain language about how companies will exit their current form to provide investors with a return of capital. Sometimes the language gives a specific date (i.e. on or before). In other instances the language gives the unlisted REIT considerable flexibility for this exit. As shown in Figure 1, four alternative exit events could occur - private market liquidation of properties, exchange listing, IPO as the combination of exchange listing with public capital infusion, and merger with an existing company. Unlisted REITs may hope to capitalize on a favorable private-market-to-public-market arbitrage opportunity during latter stages of the investment period. 2.1 Investor appeal and criticisms
3. Modeling the problem of fixed share prices and wealth transfer: Our model captures how long-term investors who buy unlisted REIT shares with fixed offer prices are affected by the opportunistic actions of follow-on investors. Long-term investors in the model remain passive, buying shares of the unlisted REIT at origination and holding them until the unlisted REIT executes an exit either by liquidating in the private market, listing on a public exchange, or merging with another company. Follow-on investors behave in active ways, choosing whether or not to buy shares at a fixed offer price after properties held by the unlisted REIT experience changes in values. Structuring the model in this way allows us to concentrate on the factors that enter into follow-on investors' decisions and isolate how their trading affects long-term investors' return on unlisted REIT investments. 3.1 Model timeline
4. Investment behaviors of follow-on investors: We show above that long-term investors' profit is a function of the number of shares bought by follow-on investors, n1. We consider two scenarios: prohibited follow-on investment, and profit maximization under perfectly competitive follow-on investment. In the subsequent analysis of the two scenarios, we denote the corresponding equilibrium values of n1 by n1^P and n1^C, respectively. 4.1 Follow-on investment is prohibited
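To make the dilution mechanism concrete, the following minimal sketch computes long-term investors' capital gain as follow-on entry grows. The function names, parameter values and the closed-form competitive equilibrium are illustrative assumptions consistent with the verbal model, not the paper's notation or calibration:

```python
# Minimal sketch of the fixed-offer-price dilution mechanism; all names
# and numbers are illustrative, not the paper's model calibration.

def value_per_share(n0, n1, offer_price, growth, fee_rate):
    """Intrinsic value per share after n1 follow-on shares are sold at the
    fixed offer price; each sale adds equity only net of network fees."""
    assets_long_term = n0 * offer_price * (1 - fee_rate) * (1 + growth)
    assets_follow_on = n1 * offer_price * (1 - fee_rate)
    return (assets_long_term + assets_follow_on) / (n0 + n1)

def competitive_n1(n0, offer_price, growth, fee_rate):
    """Follow-on shares bought in competitive equilibrium: entry continues
    until intrinsic value is driven down to the fixed offer price."""
    net = offer_price * (1 - fee_rate)
    excess = n0 * net * (1 + growth) - n0 * offer_price
    return max(excess / (offer_price - net), 0.0)

n0, price, growth, fee = 1_000, 10.0, 0.20, 0.10
n1_star = competitive_n1(n0, price, growth, fee)
for n1 in (0.0, n1_star / 2, n1_star):
    gain = value_per_share(n0, n1, price, growth, fee) - price
    print(f"n1={n1:7.0f}: long-term capital gain per share = {gain:.3f}")
# The gain shrinks from 0.800 to 0.000 as follow-on entry dilutes the
# portfolio: long-term profit converges to dividends alone, while the
# broker-dealer network collects fee_rate * offer_price * n1 in new fees.
```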
5. Empirical evidence of dividend policy distortion: In this section we provide evidence that the unlisted REIT structure leads to perverse dividend policies. These policies produce dividends well above comparable listed REITs and, more importantly, in excess of 100 percent of FFO. All data come from SNL Securities and cover the three-year period 2003 through 2005[9]. Prior to 2003, the unlisted REIT data were not considered adequate to form broad enough cross-sections by firm size and property type. Table III presents the sample of 11 unlisted REITs. All of these companies were started after 1995 and represent every major property type. Total asset size ranges from well under $1 billion to $8 billion. Unfortunately, dividend information is not available for all unlisted REITs during each year. The comparative analysis of unlisted REIT and listed REIT dividend payouts during the sample period begins with the formation of four shadow portfolios. The first portfolio (i.e. sector portfolio) consists of 41 listed REITs spanning every property type represented in the unlisted REIT sample. The firms were selected to achieve a balance by asset size. A list of these REITs and the REITs placed in other shadow portfolios appears in Table IV. To eliminate age bias, a second shadow portfolio (i.e. IPO portfolio) was formed that includes 23 firms with IPOs after 1996. This portfolio also is balanced by property type and asset size. Finally, we assembled two similarly balanced portfolios of differing sizes (i.e. small size and large size). The small size portfolio has 17 REITs with NAVs falling in the range of $400 million to $700 million and the large size portfolio consists of 30 firms with NAVs ranging from $1.2 billion to $3 billion. Table V provides comparisons of average dividend payout as a percent of FFO across the unlisted and listed REIT portfolios during 2003, 2004, and 2005. The dividend payout ratios for listed REITs fall within a normal range for REITs of between 60 and 75 percent (Farrell, 2006). Average payout ratios for the sample of unlisted REITs not only exceeded those of listed REITs by a sizeable margin, but were greater than 100 percent during all three years. This means that unlisted REITs paid out more in dividends than they received in cash flow over a three year period. Logic dictates that such a perverse dividend policy is unsustainable.
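The payout comparison rests on simple arithmetic; the sketch below uses made-up figures (not the SNL Securities data) to show why a payout above 100 percent of FFO cannot be sustained from operating cash flow:

```python
# Illustrative payout-ratio check; the figures below are invented, not
# the SNL Securities data used in the paper.
def payout_ratio(dividends_paid, ffo):
    """Dividends paid as a percentage of funds from operations (FFO)."""
    return 100.0 * dividends_paid / ffo

for name, dividends, ffo in [("listed REIT", 6.8, 10.0),      # inside the normal 60-75% band
                             ("unlisted REIT", 11.0, 10.0)]:  # above 100% of cash flow
    ratio = payout_ratio(dividends, ffo)
    status = "unsustainable" if ratio > 100 else "normal"
    print(f"{name}: payout = {ratio:.0f}% of FFO ({status})")
```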
6. Conclusion: The idea to provide Federal tax-exempt status to business trusts that invest in commercial real estate became law in 1960 due to the efforts of a diverse group of special industry and government interests (Lynn and Bloomfield, 2003). Beyond the motivations of these groups to increase capital flows for real estate investment and rehabilitation was the intent of making available to retail investors certain advantages previously reserved for investors with greater resources. These advantages include diversification through pooling of funds, access to the benefits of professional investment advisors, and the ability to collectively finance properties of a scale that most individual investors could not undertake by themselves (Fass et al., 2004). As the REIT market matured, firms became larger and more public. Average investors seeking these opportunities had to enter the public securities environment, which they may neither find familiar nor trust. Recognizing that a segment of the retail investor population was not comfortable with prevailing commercial real estate investment options, sponsors of unlisted REITs reached retail investors through existing broker-dealer channels to provide an alternative way of accessing institutional grade real estate portfolios. The programs of unlisted REIT sponsors, therefore, directly align with the spirit of the original US REIT legislation. In addition, these programs demonstrated substantial appeal as evidenced by the amount of capital flowing to unlisted REIT sponsors during recent years through broker-dealer networks. During the initial phase in the maturation of unlisted REITs, critics emphasized the size of the fees earned by participants in the broker-dealer network and the integrity of sponsors. As market conditions change, investors' holding periods lengthen, and the exit strategies of unlisted REITs come to fruition, closer scrutiny is being directed to more general problems, such as conflict-of-interest issues and structural problems of the investment programs. In this paper we identify one structural flaw present in unlisted REIT investment programs - the fixed share price. Our model provides a mechanism for analyzing the consequences of fixed share prices on fees generated through the broker-dealer network and the profits and returns received by both long-term (i.e. those who invest early) and opportunistic follow-on (i.e. those who enter late) investor groups. The model solutions using a fixed share price assumption provide several insights. First, opportunistic follow-on investors participate only during rising real estate markets, as the intrinsic values of shares rise above fixed share prices. Second, during periods of rising property values and with perfectly competitive capital markets, follow-on investors will buy shares as long as intrinsic share value exceeds fixed share price, driving long-term investors' profit to the level of dividends. Profits in excess of dividends are absorbed by participants in the broker-dealer network. Third, a high contractual dividend level can be used to mitigate the wealth transfer from long-term investors. A comparative analysis of the dividend payout behaviors of listed and unlisted REITs during recent years reveals that unlisted REITs have paid dividends relative to FFO well in excess of listed REITs and also approximately 110 percent of their own FFOs.
This perverse dividend policy follows from the fixed-share price structure adopted by these firms. The basic structure of unlisted REITs has many desirable features and should be preserved, but with modification to the fixed share price feature to limit investor conflicts. A convenient parallel to unlisted REIT unit investment in commercial real estate is the commingled fund (see Fosheim, 1995). Ennis (1996) describes the institutional arrangement as follows: In a straightforward process each asset in the commingled fund is appraised periodically. The value of the properties are then added together to arrive at the value for the fund. As properties change in value, the fund value is adjusted. Commingled fund values are quoted as if the properties were sold at appraised value (at least on average across the properties in the fund) ... Intrinsic value is determined only by the asset appraisal methodology controlled by the fund manager (p. 36). Analysts following publicly-traded REITs use simpler and less costly NAV calculations (relative to appraisals) to mark the values of REIT assets to market. Regardless of the approach taken, the unlisted REIT business model needs adjustment to avoid the undesirable consequences arising from fixed offer pricing.
|
Knowledge management in law firm business
|
[
"Knowledge management",
"Communication technologies",
"Small enterprises",
"Lawyers",
"Norway"
] |
Summarize the following paper into structured abstract.
1 Introduction: A law firm can be understood as a social community specializing in the speed and efficiency in the creation and transfer of legal knowledge (Nahapiet and Ghoshal, 1998). In recent years, law firms have been undergoing significant changes. Most large law firms have switched from a professional model to a corporate business model, employing competitive strategies and a profit orientation. To be successful, law firms need appropriate resources, which must be bundled and leveraged to implement a strategy effectively. According to Hitt et al. (2007), those valuable resources must be expertly managed to achieve profitable growth. This paper introduces the evolutionary perspective of stages of growth models. Such models are helpful for determining where an organization is, where it came from, and in what direction it is moving in terms of interoperability with other organizations. Stages of growth imply that there is a cumulative improvement over time, where continuous struggle and successes are more important than paradigm shifts.
2 Law firm business: Many law firms have transformed themselves from a professional model to a corporate business model. Knowledge is perceived as the resource on which the business is based. Unique, non-imitable, combinable, and exploitable knowledge provides competitive advantage. Such valuable resources must be expertly managed to achieve profitable growth. In particular, idiosyncratic, intangible resources of the firm, specifically human capital and social capital, are crucial for law firms (Hitt et al., 2007). Their primary resources stem from the human capital and social capital of the individuals employed within. At the individual level, human capital attributes include education, experience, and skills. For the firm, human capital can be conceptualized as the aggregate of employees' knowledge and skills. In addition to formal education, professionals learn via work experiences, including working with others within their own firm, cooperating with other firms, and providing services to clients. Social capital, defined as networks of relationships that provide value to the participant/holder, constitutes a valuable resource for the conduct of business (Hitt et al., 2007). Many law firms are now large multinational businesses. In 2006, the largest law firm in the world, Clifford Chance, based in the UK, had 3,000 lawyers in 19 countries and gross revenue of $1,500 million. Effective management of large law firms has become critical, and the managing partners have learned that while they might be great lawyers, they may not have the skills to be professional executives. Consequently, many firms have hired executives with a business background (Hitt et al., 2007). McKenna (2007) surveyed law firm leaders on the burning issues that they were facing at their firms. Competition in all its various forms was on the minds of most respondents. In particular, for firms seeking to corner the most lucrative work, league tables and other rankings have grown in importance. Competition implies rivalry among firms in many respects. For example, many law firm leaders talked about the competition for talent. As law firms are facing increasing competition to get the best work, they need the best talent. Top students develop into good knowledge workers who are increasingly on the move. Law firms are typically structured as partnerships. Attorneys become partners via up-or-out promotion contests. In these contests, employers hire attorneys as associates for a predetermined time period or on a permanent basis. At the end of this time period, associates are either invited to become equity partners (firm owners), are fired or stay on as regular employees (Rebitzer and Taylor, 2007). The key asset of a law firm is knowledge, in the form of working relationships senior attorneys have established with clients. Although such assets cannot be bought and sold like conventional capital, law firms can preserve the value of assets by organizing themselves as partnerships in which senior attorneys essentially hand off key assets to succeeding generations of junior attorneys (Rebitzer and Taylor, 2007).
3 Lawyers as knowledge workers: Coinciding with a move to professional management, law firms have increased their emphasis on billable hours for attorneys. Hitt et al. (2007) found that under the billable hour model, there is often a negative incentive for being creative and finding a quick solution to a dispute or complex legal issue, as protracted disputes and complex legal issues generate a tremendous number of billable hours. The pressure to bill hours is high, especially because associate and "non-rainmaker" partner bonuses are based on the number of annual hours billed. Lawyers can be defined as knowledge workers. They are professionals who have gained knowledge through formal education (explicit) and through learning (tacit). Often, there is some variation in the quality of their education and learning. The value of professionals' education tends to hold throughout their careers. For example, lawyers in Norway are asked whether they got the good grade of "laud", even 30 years after graduation. Professionals' prestige (based partly on the institutions from which they obtained their education) is a valuable organizational resource because of the elite social networks that provide access to valuable external resources for the firm (Hitt et al., 2001). After completing their advanced educational requirements, most professionals enter their careers as associates in law. In this role, they continue to learn and thus, they gain significant tacit knowledge through "learning by doing". Therefore, they largely bring explicit knowledge derived from formal education into their firms and build tacit knowledge through experience (Hitt et al., 2001). Most professional service firms use a partnership form of organization. In such a framework, those who are highly effective in using and applying knowledge are eventually rewarded with partner status, and thus own stakes in a firm. On their road to partnership, these professionals acquire considerable knowledge, much of which is tacit. Thus, by the time professionals achieve partnership, they have built human capital in the form of individual skills (Hitt et al., 2001). Since law is precedent-driven, its practitioners are heavily invested in knowing how things have been done before. Jones (2000) found that many attorneys, therefore, are already oriented toward the basic premises of knowledge management, though they have been practicing it on a more individualized basis and without the help of technology and virtual collaboration. As such, a knowledge management initiative could find the areas where lawyers are already sharing information and then introduce modern technology to support this information sharing and make it more effective. Lawyers work in law firms, and law firms belong to the legal industry. According to Becker et al. (2001), the legal industry will change rapidly because of three important trends. First, global companies increasingly seek out law firms that can provide consistent support at all business locations and integrated cross-border assistance for significant mergers and acquisitions as well as capital-market transactions. Second, client loyalty is decreasing as companies increasingly base purchases of legal services on a more objective assessment of their value, defined as benefits net of price. Finally, new competitors have entered the market, such as accounting firms and internet-based legal services firms. Attorneys are knowledge workers, who differ from other employees because they essentially carry around key firm assets in their brains.
The knowledge assets these lawyers control - an understanding of the needs and interests of clients - are obviously of greatest value when used with specific clients. According to Rebitzer and Taylor (2007), this specificity gives individual attorneys considerable leverage over their employers. By threatening to "grab and leave" with an important client, attorneys can leverage an increased share of their firm's revenues.
4 Knowledge organization: Bennet and Bennet (2005a) define knowledge organizations as complex adaptive systems composed of a large number of self-organizing components that seek to maximize their own goals but operate according to rules in the context of relationships with other components. In an intelligent complex adaptive system, the agents are people. The systems (organizations) are frequently composed of hierarchical levels of self-organizing agents (or knowledge workers), which can take the forms of teams, divisions or other structures that have common bonds. Thus, while the components (knowledge workers) are self-organizing, they are not independent from the system they comprise (the professional organization). Knowledge is often referred to as information combined with interpretation, reflection and context. In cybernetics, knowledge is defined as a reducer of complexity or as a relation to predict and to select those actions that are necessary in establishing a competitive advantage for organizational survival. That is, knowledge is the capability to draw distinctions within a domain of actions. According to the knowledge-based view of the organization, the uniqueness of an organization's knowledge plays a fundamental role in its sustained ability to perform and succeed. According to the knowledge-based theory of the firm, knowledge is the main resource for a firm's competitive advantage. Knowledge is the primary driver of a firm's value. Performance differences across firms can be attributed to the variance in the firms' strategic knowledge. Strategic knowledge is characterized by being valuable, unique, rare, non-imitable, non-substitutable, non-transferable, combinable and exploitable. Unlike other inert organizational resources, the application of existing knowledge has the potential to generate new knowledge (Garud and Kumaraswamy, 2005). Inherently, however, knowledge resides within individuals and, more specifically, in the employees who create, recognize, archive, access and apply knowledge in carrying out their tasks. Consequently, the movement of knowledge across individual and organizational boundaries is dependent on employees' knowledge-sharing behaviours. Bock et al. (2005) found that extensive knowledge sharing within organizations still appears to be the exception rather than the rule. The knowledge organization is very different from the bureaucratic organization. For example, the knowledge organization's focus on flexibility and customer response is very different from the bureaucracy's focus on organizational stability and the accuracy and repetitiveness of internal processes. In the knowledge organization, current practices emphasize using the ideas and capabilities of employees to improve decision making and organizational effectiveness. In contrast, bureaucracies utilize autocratic decision making by senior leadership with unquestioned execution by the workforce (Bennet and Bennet, 2005b). In knowledge organizations, transformational and charismatic leadership is an influential mode of leadership that is associated with high levels of individual and organizational performance. Leadership effectiveness is critically contingent on, and often defined in terms of, leaders' ability to motivate followers toward collective goals or a collective mission or vision (Kark and Dijk, 2007). Uretsky (2001) argues that the real knowledge organization is the learning organization. A learning organization is one that changes as a result of its experiences.
Under the best of circumstances, these changes result in performance improvements. The phrases knowledge and learning organizations are usually (but not necessarily) used to describe service organizations. This is because most, if not all, of the value of these organizations comes from how well their professionals learn from the environment, diagnose problems, and then work with clients or customers to improve their situations. The problems with which they work are frequently ambiguous and unstructured. The information, skills, and experience needed to address these problems vary with work cases. A typical example is detectives in police investigations. Similarly, Bennet and Bennet (2005b) argue that learning and knowledge will have become two of the three most important emergent characteristics of the future world-class organization. Learning will be continuous and widespread, utilizing mentoring, classroom and distance learning and will likely be self-managed with strong infrastructure support. The creation, storage, transfer and application of knowledge will have been refined and developed such that it becomes a major resource of the organization as it satisfies customers and adapts to environmental competitive forces and opportunities. The third characteristic of future knowledge organizations will be that of organizational intelligence. Organizational intelligence is the ability of an organization to perceive, interpret and respond to its environment in a manner that meets its goals while satisfying multiple stakeholders. Intelligent behaviour may be defined as being well prepared, providing excellent outcome-oriented thinking, choosing appropriate postures, and making outstanding decisions. Intelligent behaviour includes acquiring knowledge continuously from all available resources and building it into an integrated picture, bringing together seemingly unrelated information to create new and unusual perspectives and to understand the surrounding world (Bennet and Bennet, 2005b). According to Bennet and Bennet (2005a), designing the knowledge organization of the future implies development of an intelligent complex adaptive system. In response to an environment of rapid change, increasing complexity and great uncertainty, the organization of the future must become an adaptive organic business. The intelligent complex adaptive system will enter into a symbiotic relationship with its cooperative enterprise, virtual alliances and external environment, while simultaneously retaining unity of purpose and effective identification and selection of incoming threats and opportunities. In the knowledge organization, innovation and creativity are of critical importance. The literature on creativity provides a view of organizing for innovation by focusing on how individuals and teams come to shape knowledge in unique ways. Innovation consists of the creative generation of a new idea and the implementation of the idea into a valuable product, and thus creativity feeds innovation and is particularly critical in complex and interdependent work. Taylor and Greve (2006) argue that creativity can be viewed as Stage 1 of the overall innovation process. Innovative solutions in the knowledge organization arise from diverse knowledge, processes that allow for creativity, and tasks directed toward creative solutions. Creativity requires application of deep knowledge because knowledge workers must understand the knowledge domain to push its boundaries.
Team creativity likewise relies on tapping into the diverse knowledge of a team's members (Taylor and Greve, 2006).
5 Stages of growth: Stages of knowledge management technology (KMT) constitute a relative concept concerned with IT's ability to process information for knowledge work. IT at later stages is more useful to knowledge work than IT at earlier stages. The relative concept implies that IT is more directly involved in knowledge work at higher stages, and that IT is able to support more advanced knowledge work at higher stages. The KMT stage model consists of four stages. Stage 1 is general IT support for knowledge workers. This includes word processing, spreadsheets and e-mail. Stage 2 is information about knowledge sources. An information system stores information about who knows what within the firm and outside the firm. The system does not store what they actually know. A typical example is the company intranet. Stage 3 is information representing knowledge. The system stores what knowledge workers know in terms of information. A typical example is a database. The fourth and final stage is information processing. An information system uses information to evaluate situations. A typical example here is an expert system. Stages of IT support in knowledge management are useful for identifying the current situation as well as planning for future applications in the firm. Each stage is described in the following (Figure 1): * Stage 1. Tools for end-users are made available to knowledge workers. In the simplest form, this means a capable networked PC on every desk or in every briefcase, with standardized personal productivity tools (word processing, presentation software) so that documents can be exchanged easily throughout a company. More complex and functional desktop infrastructures can also be the basis for the same types of knowledge support. Stage 1 is recognized by widespread dissemination and use of end-user tools among knowledge workers in the company. For example, lawyers in a law firm will in this stage use word processing, spreadsheets, legal databases, presentation software, and scheduling programs. Stage 1 can be labelled end-user-tools, lawyer-to-technology or people-to-technology as information technology provides knowledge workers with tools that improve personal efficiency. * Stage 2. Information about who knows what is made available to all people in the firm and to selected outside partners. Search engines should enable work with a thesaurus, since the terminology in which expertise is sought may not always match the terms the expert uses to classify that expertise. According to Alavi and Leidner (2001), the creation of corporate directories, also referred to as the mapping of internal expertise, is a common application of KMT. Since much knowledge in an organization remains non-codified, mapping the internal expertise is a potentially useful application of technology to enable easy identification of knowledgeable persons. Here, we find the cartographic school of knowledge management (Earl, 2001), which is concerned with mapping organizational knowledge. It aims to record and disclose who in the organization knows what by building knowledge directories. Often called Yellow Pages, the principal idea is to make sure knowledgeable people in the organization are accessible to others for advice, consultation, or knowledge exchange.
Knowledge-oriented directories are not so much repositories of knowledge-based information as gateways to knowledge, and the knowledge is as likely to be tacit as explicit. Information about who knows what is sometimes called metadata, representing knowledge about where the knowledge resides. Providing taxonomies or organizational knowledge maps enables individuals to rapidly locate the individual who has the needed knowledge, more rapidly than would be possible without such IT-based support. Stage 2 can be labelled who-knows-what, lawyer-to-lawyer or people-to-people as knowledge workers use information technology to find other knowledge workers. * Stage 3. Information from knowledge workers is stored and made available to everyone in the firm and to designated external partners. Data-mining techniques can be applied here to find relevant information and combine information in data warehouses. On a broader basis, search engines are web browsers and server software that operate with a thesaurus, since the terminology in which expertise is sought may not always match the terms used by the expert to classify that expertise. One starting approach in Stage 3 is to store project reports, notes, recommendations and letters from each knowledge worker in the firm. Over time, this material will grow fast, making it necessary for a librarian or a chief knowledge officer to organize it. In a law firm, all client cases will be classified and stored in databases using software such as Lotus Notes. An essential contribution that IT can make is the provision of shared databases across tasks, levels, entities and geographies to all knowledge workers throughout a process (Earl, 2001). According to Alavi and Leidner (2001), one survey found that 74 per cent of respondents believed that their organization's best knowledge was inaccessible and 68 per cent thought that mistakes were reproduced several times. Such a perception of failure to apply existing knowledge is an incentive for mapping, codifying and storing information derived from internal expertise. Stage 3 can be labelled what-they-know, lawyer-to-information or people-to-docs as information technology provides knowledge workers with access to information that is typically stored in documents. Examples of documents are contracts and agreements, reports, manuals and handbooks, business forms, letters, memos, articles, drawings, blueprints, photographs, e-mail and voice mail messages, video clips, script and visuals from presentations, policy statements, computer printouts and transcripts from meetings. * Stage 4. Information systems solving knowledge problems are made available to knowledge workers and solution seekers. Artificial intelligence is applied in these systems. For example, neural networks are statistically oriented tools that excel at using data to classify cases into one category or another. Another example is expert systems that can enable the knowledge of one or a few experts to be used by a much broader group of workers requiring the knowledge. According to Alavi and Leidner (2001), an insurance company was faced with the commoditization of its market and declining profits.
The company found that applying the best decision-making expertise via a new underwriting process, supported by a knowledge management system based on best practices, enabled it to move into profitable niche markets and, hence, to increase income. According to Grover and Davenport (2001), artificial intelligence is applied in rule-based systems, and more commonly, case-based systems are used to capture and provide access to resolutions of customer service problems, legal knowledge, new product development knowledge and many other types of knowledge. Knowledge is explicated and formalized during the knowledge codification phase that took place in Stage 3. Codification of tacit knowledge is facilitated by mechanisms that formalize and embed it in documents, software and systems. However, the higher the tacit elements of the knowledge, the more difficult it is to codify. Codification of complex knowledge frequently relies on information technology. Expert systems, decision-support systems, document management systems, search engines and relational database tools represent some of the technological solutions developed to support this phase of knowledge management. Consequently, advanced codification of knowledge emerges in Stage 4, rather than in Stage 3, because expert systems and other artificial intelligence systems have to be applied to be successful. Stage 4 can be labelled how-they-think, lawyer-to-application or people-to-systems where the system is intended to help solve a knowledge problem.
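One possible reading of the four-stage model is as a cumulative capability checklist; the sketch below assumes hypothetical capability names rather than the survey instrument's wording:

```python
# Hypothetical reading of the four-stage KMT model as a cumulative
# capability checklist; capability names are illustrative only.
KMT_STAGES = {
    1: ("end-user-tools / lawyer-to-technology",
        {"word_processing", "spreadsheets", "email"}),
    2: ("who-knows-what / lawyer-to-lawyer",
        {"expertise_directory", "intranet_yellow_pages"}),
    3: ("what-they-know / lawyer-to-information",
        {"document_repository", "case_database"}),
    4: ("how-they-think / lawyer-to-application",
        {"expert_system", "case_based_reasoning"}),
}

def kmt_stage(capabilities: set) -> int:
    """Return the highest stage whose required capabilities are all
    present, reflecting the model's cumulative view of growth."""
    stage = 0
    for level in sorted(KMT_STAGES):
        _, required = KMT_STAGES[level]
        if required <= capabilities:
            stage = level
        else:
            break
    return stage

firm = {"word_processing", "spreadsheets", "email",
        "expertise_directory", "intranet_yellow_pages",
        "document_repository", "case_database"}
print(kmt_stage(firm))  # 3 -- what-they-know, the most common stage in the survey
```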
6 Empirical results: The largest law firms in Norway were obtained from the web site (www.paragrafen.no). This web site lists all law firms in Norway that have a home page on the internet. The largest law firms were selected by identifying all law firms that had at least five lawyers in the firm. This procedure resulted in a total of 102 law firms. It was possible to obtain e-mail addresses for managing directors/chief executive officers in 95 of these law firms by contacting the firms. Most law firms in Norway are small. Because KMT for sharing information depends on a minimum number of lawyers to make sense, only law firms with a minimum of five lawyers were selected for this survey. Questionnaires were prepared and sent to the chief executive officer (CEO) in each firm. The questionnaire was developed in QuestBack. QuestBack is an online tool for electronic research. The service is built around three modules: QuestDesigner to create and publish surveys, QuestReporter for analysis of incoming responses and QuestManager to administer ongoing QuestBack initiatives (www.questback.com). QuestBack has a reminder function, which was used for two follow-ups about one week and two weeks after the date of the initial mailings. Five firms declined participation, citing the length of the questionnaire. A total of 19 firms, providing a response rate of 20 per cent, returned usable responses. Characteristics of respondents are listed in Table I. Although most respondents indicated the job title of lawyer, their current position was managing partner or chief executive officer. The average responding law firm had a total of 43 lawyers, which indicates a large firm by Norwegian standards. In total, 14 of these lawyers were partners in the firm. The IT budget constituted 2.3 per cent of the income budget, while IT staff was 1.7 per cent of total staff in the average firm. Table II shows the number of responding firms currently operating in each stage of growth. This is based on the part of the survey instrument describing extensively the four stages of growth. Generally, the results show that what-they-know occurs most often, followed by who-knows-what and end-user-tools. Only one firm reported Stage 4 of how-they-think. Table III shows the various paths of evolution reported by the respondent firms. Unfortunately, only eight out of 19 respondents filled in this part of the questionnaire. As expected, the path of evolution generally proceeds from end-user-tools to who-knows-what to what-they-know. This was the case for three respondents. However, the remaining five respondents show varying patterns of reciprocal behaviour as shown in Figure 2.
7 Conclusion: This paper reported an exploratory study of the stages of growth model for KMT in law firms. The sample size of 19 firms does not yield significant empirical results beyond preparing the ground for future empirical studies. However, law firms are undergoing significant changes, which makes this research important. Law firms are emerging as knowledge organizations in the legal knowledge business. Knowledge is their strategic resource that must be managed to achieve profitable growth. To support knowledge work by lawyers, law firms have implemented knowledge management systems. In this paper, a stage model for KMT in law firms was presented. The four stages are lawyer-to-technology, lawyer-to-lawyer, lawyer-to-information and lawyer-to-application, respectively.
|
Revisiting the New Zealand apple industry: the impacts of change
|
[
"Agriculture",
"Fruits",
"New Zealand"
] |
Summarize the following paper into structured abstract.
Introduction: The pip fruit sector is a significant industry in New Zealand; the efficiency with which the industry uses its resources has been said to impact on the economy as a whole (NZBR, 2001). Unfortunately, recent times have seen New Zealand face a number of challenges as an apple-producing nation, with many growers now facing financial ruin (MAF, 2002). As a result, the number of growers has dropped substantially from approximately 1,500 in 1998 to around 900 in the 2005 season (Stringleman, 2005). With the majority of growers failing to cover their costs with their 2005 harvest and after barely breaking even in 2004, industry analysts predict this trend will continue, failing a substantial and widespread industry pick-up in market returns next season (Stringleman, 2005). This paper examines the state of the New Zealand apple industry from the grower's perspective. Following on from a previous study by Fitzgerald (2003), it addresses the effects of a number of changes which have taken place in the industry over the past decade and contributed to the position growers find themselves faced with today. These have ranged from daily operational issues such as packaging requirements, spraying regimes, quality standards, and technological improvements, to increased competition in Northern Hemisphere markets and the deregulation of the New Zealand apple industry (Fitzgerald, 2003). A sample of growers from one of New Zealand's major growing regions has been used to empirically assess the impact of these changes and gain further insight into the future outlook for New Zealand apple growers.
Literature review: New Zealand is a relative newcomer on the world apple scene; most of the industry's growth has taken place in the past 30 years. In that time, New Zealand production has tripled to over 20 million cartons/year, accounting for about one-third of the Southern Hemisphere crop and about 1 percent of world production (Ferree et al., 1999). The majority of orchards are owned and operated by families as a lifestyle-cum-self-employment opportunity (Fitzgerald, 2003). There are currently around 900 orchards in New Zealand, operating in two large and seven smaller growing regions. The Nelson region in the south island and Hawke's Bay in the north island are the traditional areas for apple production in New Zealand, accounting for approximately 30 and 55 percent of the country's total production respectively (Ferree et al., 1999). In recent years, the New Zealand apple industry has fallen upon hard times; a number of paradigm shifts have challenged the country's apple growers, leaving many staring financial ruin in the face. Several factors including a world oversupply of fresh fruit, the increased bargaining power of retailers through the emergence of the supermarket industry and changes to the statutory framework governing the industry have contributed to the current position in which growers have found themselves (Fitzgerald, 2003; Stringleman, 2005). Industry deregulation
Methodology: This research has focused on apple growers from the greater Nelson district, one of New Zealand's primary growing regions (including the areas of Riwaka, Brooklyn, Mariri, Lower Moutere and the Waimea plains). A sample of orchardists was obtained using a snowballing technique, with the researchers' existing contacts within the industry providing an adequate starting point. All of the respondents were involved with the apple industry either as current orchard owners, or those in a position of corporate control (e.g. Business Manager). The research was conducted using structured interviews. Of the 30 responses achieved, 29 were administered in face-to-face interviews and one was completed over the telephone. In addition to this, an adapted set of the questions was used to interview a nurseryman (a supplier of apple trees to orchards), in order to provide the researchers with an informed third-party perspective.
Findings: Current state of the industry
Limitations and future research: The major limitation of this study is its sample, firstly because it was taken from only one of the country's many growing regions. In order to gain a broader understanding of orchardists' perspectives on the current state of the New Zealand apple industry, a wider sample should be used. A comparison of perspectives between regions would be a valuable addition, as it would allow for an analysis of how different regions have fared under deregulation (Table I). Further, the research sample consisted only of respondents who were still involved in the industry; this meant that the perspectives of those growers who have already exited the market following deregulation were not gained. This group would provide an insightful viewpoint, as it is likely that they would have a differing opinion to growers that are still in the industry. The timing of this research is another issue, the study being conducted during what is a critical period for most growers. Many were in the process of deciding whether or not to reinvest in the 2005/2006 season. Therefore it is possible that some of the respondents have since exited the industry.
Summary and conclusion: New Zealand apple growers have experienced a large number of changes over the past decade, some for the better and others for the worse; regardless of these individual effects, however, the industry is now identified as being in a state of turmoil. Following successive seasons with little financial return, most growers now face the reality that without a substantial pick-up in the market next season they will have little choice but to close down their operations. While it may be that the uncontrollable factor of world competition eventually decides their fate, growers have identified a number of improvements (including improved industry cooperation, the introduction of new varieties, renewed quality standards and government assistance) that will at the least give them a fighting chance of survival.
|
Perception is reality: change leadership and work engagement
|
[
"Transformational leadership",
"Mediation",
"Work engagement",
"Change leadership",
"Organizational change",
"Employee perceptions"
] |
Summarize the following paper into structured abstract.
1. Introduction: The world is dynamic. Globally, leaders assess the environment and enact organizational change to pursue opportunities and conquer challenges. Organizations that adapt and innovate remain viable. However, enacting change has had a less than stellar track record. Consider the following examples. The America Online/Time Warner merger resulted in an estimated loss of $100 billion (Dumon, 2008; McGrath, 2015). Boeing's 787 Dreamliner project, with an estimated cost and development timeline of $6 billion over four years, instead cost $32 billion over eight years (Ausick, 2014). The UK government canceled its largest civilian information technology (IT) project after spending $4 billion and ten years on it (Gibbons, 2015). Examples like these illustrate that failed change is costly, with reported failure rates ranging from 20 percent (Weiner et al., 2008) to 70 percent (Beer and Nohria, 2000).
2. Literature review and hypotheses: The literature approaches change leadership from either an organizational change theory (OCT) or a leadership theory perspective (Herold et al., 2008). OCT proposes that the need for change originates from external and internal drivers, thereby focusing on the relationship between the organization and its environment. Resource dependency theory (RDT) (Hillman et al., 2009) and institutional theory (Scott, 2007) are two OCTs identifying change drivers.
3. Methods: 3.1 Sample
4. Results: 4.1 Qualitative analysis
5. Discussion: 5.1 Implications for practice
|
Is there a silver bullet to career success for women?
|
[
"Diversity",
"Engagement and learning",
"Management development",
"Gender difference",
"Progression of women",
"Retention of women"
] |
Summarize the following paper into structured abstract.
Men and women are not the same: A review of the literature suggests that the key to career success for female talent lies elsewhere. All of these diversity initiatives start from the premise that men and women are equal. I agree that men and women are equal, yet that does not mean we are the same. Certainly, women can do the same jobs, should be paid the same and are able to achieve the same results. However, they may well use a different route to that same result. In addition, women are inspired and motivated differently.
Top sports coaches learn to adapt to women's needs: Sports coaches of top teams, for instance, find that challenging men is a useful strategy for motivating a team to perform better. Getting angry, setting challenging targets or setting up internal competitions are all effective ways to motivate men. It usually spurs them to push on, aim higher, fight back and show what they are worth.
Research finds that men and women compete differently: The difference in response between men and women can probably be explained by findings in literature about how men and women compete and find security, such as described by Delfos (2010) in her book, De Schoonheid van het. Men compete on being the biggest, the best and the strongest. They find their security in being at the top - as when you are at the top, no one will attack you.
Line managers need to change: Interestingly, these results are supported by the latest research into recruiting, retaining and advancing women. Research from McKinsey (2013), KPMG (2015), and from the 30% Club, KPMG and YSC (2013), all shows that changing culture and line management is essential for women's progression. What line managers need to do is something I have called "Gender Smart Leadership". It means adapting their style to gender needs. This may well be essential for the leader of the twenty-first century who works in an organization where over 50 per cent of the employees are women.
Gender Smart leadership in practice: When challenging women to progress, Gender Smart leaders would ask a woman to apply for a new position or promotion rather than wait until she puts herself forward. This requires that managers regularly check in with women to learn about their ambitions and encourage them to move on. The manager can, for instance, draw on evidence gathered of their achievements, together with a personal view of why the manager thinks they are ready for the next role (Table I).
Conclusion: Rather than looking for a silver bullet, organizations need to realize that they were originally designed for men. Women are different and organizations will need to adapt to maximize the progress and performance of women as well as men.
|
What firm characteristics determine women's employment in manufacturing? Evidence from Bangladesh
|
[
"Manufacturing firms",
"Bangladesh",
"Female employment"
] |
Summarize the following paper into structured abstract.
1. Introduction: On 24 April 2013, Rana Plaza, an eight-storey commercial building in Savar, a sub-district of the Greater Dhaka Area, Bangladesh, collapsed. The building, owned by the family of a prominent politician, housed a large number of garment factories that employed approximately 5,000 people, of whom 1,129 died and 2,515 were seriously injured. In the days that followed this, the deadliest garment-factory accident in history, garment workers across the industrial areas of Dhaka, Chittagong and Gazipur rioted (according to a report on the BBC website from 3 May 2013; www.bbc.com/news/world-asia-22394094). However, the uproar did not end in Bangladesh: political leaders, NGOs and religious organisations around the world criticised not only working conditions in the country but also multinational garment brands such as Benetton, Mango and Walmart for engaging "sweatshops" to manufacture their clothes. In the immediate aftermath of the tragedy, yet another terrible statistic emerged: more than half of the victims were women and children, many of whom were in nursing facilities in the building (Nelson, 2013).
2. The context of Bangladesh: Bangladesh is part of a region that practices extreme patriarchy. The societies in this part of Asia tend to be characterised by the practice of female seclusion, patrilineal principles of descent and inheritance, patrilocal principles of marriage and strict patriarchal authority structures within the family. Restrictions on women's mobility in the public domain mean that they work either as unpaid family labour or in forms of paid work that can be carried out within the home. The invisibility of such work has meant that female labour participation rates in these regions have tended to be extremely low. For example, official labour statistics show that women's employment in 1995 totalled five million (14 per cent of total employment), increasing to 16.2 million (30 per cent) by 2009 (BBS, 1996, 2011).
3. Data and variables: The data used in this study were obtained from the World Bank's Enterprise Surveys. The surveys provide the most comprehensive firm-level panel data in emerging markets and developing countries, and include firm-level characteristics, gendered employment, annual sales, workforce composition, infrastructure, innovation and technology, business-government relationships and performance measures. In Bangladesh, the first wave of the survey was carried out in 2007, while the second and third waves were conducted in 2011 and 2013, respectively[2]. The survey respondents were the business owners and managers of 120 manufacturing firms that were interviewed in all three rounds, resulting in 360 observations. Of these firms, 117 are located primarily in the two main cities of Bangladesh: Dhaka and Chittagong. The manufacturing subsectors covered by the data set include food, textiles and garments, leather, chemicals and pharmaceuticals, electrical and other manufacturing. Data were pooled for the three years 2007, 2011 and 2013. After dropping observations with missing values for the dependent variables and other covariates, we end up with 303 observations. The estimating sample is an unbalanced panel, with an average of 2.9 observations per firm[3].
4. Empirical strategy: In this section, we specify the statistical model that is used to estimate the determinants of female employment in manufacturing. The fractional nature of the dependent variable necessitates the use of the fractional logit model proposed by Papke and Wooldridge (1996)[9]. As was discussed above, our approach is closely related to that of Fakih and Ghazalian (2015). However, we extend this previous work in two significant ways. First, exploiting the panel dimension of our data, we estimate our specification with a fixed-effects model in order to control for the various unobservable and time-invariant features of the firm that tend to be correlated with female employment.
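To make the estimation strategy concrete, here is a minimal, hypothetical sketch of a Papke-Wooldridge fractional logit in Python. One standard route is a binomial GLM with a logit link and robust standard errors, though the paper does not state which software was used; all variable names and the synthetic data below are invented, and firm dummies serve only as a crude stand-in for the fixed-effects strategy described above.

```python
# Hypothetical sketch (not the authors' code) of a Papke-Wooldridge (1996)
# fractional logit, E[y | x] = Lambda(x'beta), with y the fraction of female
# permanent workers in [0, 1].
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 303                                      # matches the estimating sample size
df = pd.DataFrame({
    "female_share": rng.uniform(0, 1, n),    # placeholder outcome in [0, 1]
    "log_size": rng.normal(4, 1, n),         # placeholder firm size
    "exporter": rng.integers(0, 2, n),       # placeholder covariate
    "firm_id": rng.integers(0, 30, n),       # fewer firms than the survey's
})                                           # 120, to keep the sketch small

# Firm dummies approximate the fixed-effects idea in this illustration.
X = pd.get_dummies(df[["log_size", "exporter", "firm_id"]],
                   columns=["firm_id"], drop_first=True, dtype=float)
X = sm.add_constant(X)

model = sm.GLM(df["female_share"], X, family=sm.families.Binomial())
result = model.fit(cov_type="HC1")           # heteroskedasticity-robust SEs
print(result.params[["log_size", "exporter"]])
```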
5. Estimation: This section discusses the estimation results of the specifications explained in the previous section. Table III shows the results for the model that explains the variation in the fraction of female full-time permanent workers. The results suggest that medium-sized and large firms tend to employ larger fractions of female permanent workers. This finding is consistent with our previous interpretation of the firm size variable, but runs counter to the results of Fakih and Ghazalian (2015), who find that, in MENA's manufacturing sector, full-time female workers prefer to work in smaller rather than larger firms. This could be due to factors such as more complex technologies and more unpleasant working conditions in larger firms.
6. Conclusion and policy considerations: In recent years, Bangladesh, a country characterised by the practice of extreme patriarchy, has exhibited an impressive rate of growth in women's participation in employment, particularly in manufacturing. However, the recent tragedy at Rana Plaza, together with some emerging academic evidence, suggests that women are generally employed in low-skilled and low-paid industries within the manufacturing sector. This study sheds light on the demand-side determinants of greater female employment in such industries, which existing studies have largely neglected.
|
The learning conference
|
[
"Conferences",
"Knowledge sharing",
"Learning",
"Learning methods"
] |
Summarize the following paper into structured abstract.
Why do we pay to get lectured at?: Every so often, business managers, administrators, knowledge workers and professionals convene for events that are dedicated to learning and knowledge sharing but which actually produce very little learning. The "professional conference", as we shall call it, is the one- or two-day event called by government agencies, professional associations or independent conference organizers for the purpose of sharing information or knowledge about some topic of current interest to a community of professionals, be it business opportunities in South-East Asia, a narrative approach to corporate communications, the annual water industry conference or best practices in the food and beverage supply chain.
Variously labeled summit, roundtable or forum, the professional conference is generally packed with PowerPoint presentations, with little time assigned for discussion, reflection or other audience participation. A common experience is that delegates listen patiently in the morning, but as the day wears on they seem increasingly bored, some sneak out prematurely and many leave the conference frustrated with having been kept passive for hours on end. Although educators generally agree that the presentation or lecture is a poor vehicle for learning, it is the mainstay of professional conferences everywhere.
At scholarly conferences, where most delegates present papers, often in parallel sessions, the presentation serves several purposes: it is a showcase of recent research and thus an important element of scientific communication, it pays the delegate's way (no presentation, no funding from one's home institution), and it is entered into the presenter's curriculum vitae and qualifies him or her for promotions and grants. These rationales, however, are absent at the professional conference where, typically, presenters are invited experts or practitioners who benefit little professionally from having made their presentation and hence often receive monetary compensation.
In contradistinction to the recurrent sessions of management development programs, which often employ quite advanced instructional techniques beyond classroom teaching (coaching, peer one-on-ones, team work, reflective writing, experiential exercises, etc.), the one-shot professional conference is largely a relic of academic teaching practices in 19th-century Germany: the all-powerful professor speaks to an auditorium of obedient students.
The time is ripe for alternative types of professional conference. In the knowledge society, the managers and professionals attending conferences are often as well educated and experienced as the experts on the podium. Delegates are generally busy people engaged in important projects of their own, and they have just barely been able to free themselves from their interesting work to attend the conference. Chances are that they want opportunities to present their ongoing concerns and meet other people with like interests.
This paper begins to address this need by introducing the idea of a "learning conference" that features many facilitated knowledge-sharing activities suitable for today's highly charged knowledge workers. The paper reports concepts and practical techniques developed as input to an explorative project titled "Future meeting concepts".
The project was funded in part by the Danish Ministry of Economics and Business and in part by eight large conference venues in Denmark, most of whom derive a large part of their income from renting their facilities to conference organizers. The participating executives noted that since the introduction of the flip chart and the overhead projector in the 1970s, the meeting had seen no innovations. Hence, they formulated the need for a "learning meeting" that would involve meeting participants and not render them passive victims of PowerPoint overload.
Problem showcase: an HR directors' conference: Here is an example of a fairly standard professional conference, identified more or less at random by googling "conference". Organized by Economist Conferences (2007), the Seventh Human Resources Roundtable, "Global transformation and leadership", is a one-day event. The website lists some current challenges to the HR director and suggests there will be ample opportunity for interaction: "Discuss these and other key challenges with your peers from some of the world's most innovative and successful HR-driven organisations at our upcoming one-day roundtable in New York City."
The event runs from 8.45 a.m. to 4.00 p.m., seven hours and fifteen minutes. There is an introduction and four 75-minute slots for presentations by eleven speakers, for a total of five hours. The remaining two hours and fifteen minutes are allocated to a break in the morning, luncheon and a break in the afternoon. Since each speaker has only about twenty-five minutes, there is unlikely to be much time for questions and discussion during the sessions. One hopes that the speakers make themselves available during the breaks, so as to enable the HR directors attending this conference to engage in the advertised discussion "with your peers from the world's most innovative and successful HR-driven organisations".
The conference boasts the term "roundtable" in its title, suggesting a lively debate between a dozen participants seated around the same table, but it is a gimmick. This, too, will be a conference of passive listening, and it will probably see its share of exhausted delegates sneaking out mid-afternoon, unable to stand any more lecturing.
Research on the professional conference: Searches in the scholarly literature have failed to turn up studies on the professional conference as a forum for learning. Knowledge sharing at special types of conference, like search conferences (Emery and Purser, 1996) and consensus conferences (Andersen and Jaeger, 1999), is indeed discussed, as are several types of smaller meeting labeled conferences: medical case conferences, family group conferences, press conferences, electronic conferences, etc. More generally, alternatives to the classroom lecture as a vehicle of learning have been proposed for many years (Illeris, 2004), and PowerPoint presentations have been critiqued as well (Tufte, 2003). However, the literature is curiously silent on the learning opportunities wasted when six or twelve presentations are bundled into a conference for managers or other professionals to attend.
Assumptions about knowledge and learning at conferences: When the organizers at Economist Conferences put their Seventh HR Roundtable together, what were their thoughts on learning and knowledge sharing? Well, part of the problem is that they probably didn't give it much thought, "for this is simply how conferences are, you know". However, the program is, of course, indicative of several assumptions:
* Knowledge is held by the invited experts. Delegates have virtually nothing to contribute.
* People attend to receive the experts' knowledge or enjoy their wit.
* Experts best communicate their knowledge by speaking it and showing PowerPoint presentations.
* When people seated on chairs hear the words and see the slides, they receive this knowledge.
* Delegates will pick up the knowledge better if they are given just a little time for questions and discussion.
Presented as starkly as this, these assumptions are unlikely to find backing in any quarters, even amongst conference organizers (their typical counterargument is commercial, not based on learning theory: "Well, if we don't have prominent speakers up there, people are not going to show up. Having many speakers means that potential delegates are more likely to find one they'll want to hear".)
The learning theory implied by the traditional conference is the transfer model. Like the empiricist position, it assumes that people are blank slates on which the senses may write data about the world (Pinker, 2002). Minds are empty containers that are slowly filled during life by parents, teachers etc. Traditional schooling presupposes that knowledge held by teachers may be successfully transferred to students if teachers simply tell the students what they know (Illich, 1971; Freire, 1972). The information theory of Shannon and Weaver (1949) reinforced these ideas by pointing out that information may be transferred from a sender to a receiver through a channel, and that communication is effective when the message received is identical to the message sent.
However, when teachers or professors are frustrated in their attempts to squeeze their valuable knowledge into their students' minds, it is obviously because these minds are already filled with a thousand conversations, biological, emotional and intellectual. Any one of us harbors worlds of prior understanding and prejudice that filter and interpret incoming information to suit the intentions, inclinations and projects that constantly fill our minds and lives (Gadamer, 1975; Maturana and Varela, 1987). By nature, people are not passive receptacles; they are actively engaged in shaping their lives and trying to realize their potentials (Aristotle, 1962; Maslow, 1968; Snyder and Lopez, 2002).
For teaching to be effective and learning to take place, educators must realize that students are always actively engaged in constructing their worlds (Piaget, 1926). Learners are geared to knowledge that produces practical results in their actions (Dewey, 1915). People learn in a holistic process that integrates their total sum of experience (Kolb, 1984), they learn best at the point where each individual is just about to go (Vygotsky, 1978), they learn from engaging socially with other people in real-life situations (Wenger, 1998) and so on and so forth. However diverse, textbooks on educational theory and practice will paint pictures of learning and knowledge creation that are pretty much the exact opposite of the professional conference.
A forum for human co-flourishing: Future scholars of the professional conference will argue about which learning theory better undergirds the successful conference, and there will be many kinds of conference based on different learning theories. A first step, however, is simply to expect conferences to be founded on any modern kind of learning theory at all, that is, anything postdating the medieval - and still widely held - belief in the lecture as the medium of choice for knowledge transmission. For such a first step, let us collect a few strands of a modern and development-oriented view of human minds, knowledge and learning.
What kind of people are the HR directors headed for the Roundtable presented above? Well, they are likely to be self-motivated and full of energy, and they probably have extensive life and work experience. They may be steeped in exciting projects and have lots of things they want to do. They are going to the conference looking for new and challenging input they can use in their work, and they hope to connect with smart people with similar interests and share their concerns and thoughts with them.
This is an early 21st-century rendition of the Aristotelian notion of human flourishing, adapted to the manager or the professional in the knowledge economy. Life is about finding one's telos (purpose) and unfolding the human potential, becoming what one is, as the Roman emperor-philosopher Marcus Aurelius put it.
What is learning in such a view? Well, it is not what we usually think it is: a quantitative increase in knowledge, the storing of information, the acquisition of facts or skills, the making of meaning, or the reinterpretation of the world (Ramsden, 1992, p. 26). In a humanistic view, learning becomes indistinguishable from knowledge creation (Stacey, 2001, 2003) and thus becomes a key element of human development (UNDP, 2003). In this wider view, to learn is to expand the domain of capabilities, thus ensuring the progressive unfolding of the human potential, the flourishing of humankind.
What is human flourishing in the very concrete context of the professional conference? Well, if I am going to a conference I want it to be relevant to my current concerns; I want it to help me use my resources and unleash my powers. I want to be inspired and empowered in such a way that I'll be more successful at doing what I already want to do within the area defined by the conference topic. Or, if I am sufficiently inspired by the conference, I will twist my current projects to accommodate the new aspects that so inspired me. In the rare case, I will even take up new projects or concerns that arise out of the social and intellectual interactions at the conference.
So the key is inspiration, the enlightening experience that what I've just heard or realized or discussed with other people is new and exciting and will help me do what I want to do. Such sparks of inspiration may fly several times for me during a good conference; an excellent conference has sparks igniting again and again, and the sublime event is one prolonged fireworks of enthusiastic inspiration between all delegates - the sort of meeting one may experience once in a lifetime or read about in the literature.
In the domain of learning, this ideal for human interaction may be termed co-flourishing: when human potentials unfold and blossom in interaction with each other; when people have so inspired each other for individual or joint reflection or action that they become more fully what they are.
Design principles for the learning conference: If we posit the sort of ideal sketched in the previous paragraphs as the basis of the learning conference, what design principles may be derived for use by conference organizers? We propose the following principles:
1. Expert input is fine, but it must be concise and provocative
Learning techniques: These design principles may be brought to life in the learning conference through various process techniques.
Individual reflection
Potentials for change and research: It remains a challenge for a concerted research-and-development effort to ascertain exactly which concepts, designs and techniques produce desirable outcomes. Consultants, change agents and activists have always experimented informally with ways of meeting, interacting and knowledge sharing in groups, but the professional conference as an institution has remained surprisingly resilient - maybe because PowerPoint technology gave it a new lease on life ten years ago. It seems that every seasoned conference delegate is quite familiar with its shortcomings, yet the pattern is repeated every year.
Those who wish to create conferences that help delegates learn can draw on the many extant techniques, activities and learning tools that are used by process consultants, mediators, family therapists, network agents and training and development specialists. There is a world of intelligent process facilitation waiting to be applied in the professional conference, if only more conference organizers and meeting planners would (dare) join the current revolution in learning and knowledge processes that is taking hold in business as well as academia these years.
|
A hazard analysis methodology for the South African abattoir hygiene management system
|
[
"CCP",
"Control point",
"Hazards analysis",
"HACCP",
"Hygiene management system",
"Meat safety"
] |
Summarize the following paper into structured abstract.
1. The South African hygiene management system (HMS): The adoption of Hazard Analysis Critical Control Point (HACCP) systems to manage food safety hazards in the food sector has been widely reported. In the South African Meat Industry, the HMS is used at abattoirs to manage the safe handling and processing of meat. The HMS is a legal requirement as per Section 11(1)(e) of the Meat Safety Act, Act 40 of 2000 (South Africa (SA), 2000).
2. Hazard analysis, HACCP and the HMS: The SANS 10330:2007 (HACCP) standard was published by the International Organisation for Standardisation and adopted in South Africa.
3. Hazard analysis methodology: The Codex Alimentarius Commission (CAC) presents a two-dimensional health risk assessment model as a method of assessing the significance of food safety hazards by considering the probability of occurrence and the severity of consequences of hazards (Codex Alimentarius Commission (CAC), 2003). Table I presents a suggested hazard assessment matrix.
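As a concrete illustration of the two-dimensional assessment, the sketch below encodes a probability-by-severity matrix in Python. The scales, labels and significance threshold are assumptions for illustration only and need not match the values in the paper's Table I.

```python
# Hypothetical encoding of a two-dimensional hazard assessment matrix in the
# spirit of the CAC model: significance = probability of occurrence x
# severity of consequences.

PROBABILITY = {"remote": 1, "occasional": 2, "frequent": 3}   # assumed scale
SEVERITY = {"minor": 1, "moderate": 2, "critical": 3}         # assumed scale
SIGNIFICANCE_THRESHOLD = 4   # assumed cut-off: a score >= 4 is "significant"

def assess(hazard: str, probability: str, severity: str) -> dict:
    """Score a hazard and flag whether it is significant enough to be taken
    forward to CCP determination via the decision tree."""
    score = PROBABILITY[probability] * SEVERITY[severity]
    return {"hazard": hazard, "score": score,
            "significant": score >= SIGNIFICANCE_THRESHOLD}

# Example: pathogen contamination at evisceration scores 3 x 3 = 9.
print(assess("E. coli contamination at evisceration", "frequent", "critical"))
```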
4. Applying the proposed hazard analysis methodology in HACCP: 4.1 Flow diagram and hazard analysis
5. A legal basis for, and the necessity of, hazard analysis within the regulated HMS: 5.1 The need for hazard analysis
6. Steps to implement the HMS: The steps suggested in this section incorporate the processes followed in this paper and are intended to identify CPs towards the development of an HMS. Pre-requisites must be considered before HMS implementation, e.g. people commitment, resource allocation and education and training of personnel. In total, 19 implementation steps are presented here:
7. Conclusion and further research: The necessity of conducting hazard analysis as part of HMS development at South African abattoirs is implied in regulations. If hazard analysis is not done, the alternative would be to establish critical limits as well as monitoring systems for each hazard listed. As this is impractical, the HACCP approach is suggested. In this approach, PRPs are implemented to offer general control in the prevention of hazards. Hazard analysis is done to identify significant hazards and, using the CCP decision tree, CCPs are identified and managed as part of a HACCP plan. The HMS adopts HACCP steps, which justifies benchmarking it against HACCP.
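For readers unfamiliar with the CCP decision tree mentioned above, the following sketch paraphrases the four Codex questions in code. The question wording is abridged and the example inputs are hypothetical; the official tree (CAC, 2003) also advises on modifying the step or process when no control measure exists.

```python
# Simplified paraphrase of the four-question Codex CCP decision tree.

def is_ccp(control_measure_exists: bool,
           step_eliminates_or_reduces: bool,
           contamination_may_exceed_limits: bool,
           later_step_will_control: bool) -> bool:
    if not control_measure_exists:
        return False   # Q1: no control measure here (consider modification)
    if step_eliminates_or_reduces:
        return True    # Q2: the step itself controls the hazard -> CCP
    if not contamination_may_exceed_limits:
        return False   # Q3: hazard unlikely to reach unacceptable levels
    return not later_step_will_control   # Q4: CCP only if nothing downstream controls it

# Example: contamination can occur at evisceration and no later abattoir step
# reduces it, so the step is a CCP.
print(is_ccp(True, False, True, False))   # -> True
```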
|
Estimating ICU bed capacity using discrete event simulation
|
[
"Intensive care unit",
"Hospital beds",
"Bed capacity",
"Service levels",
"Rejection rate",
"Discrete event simulation"
] |
Summarize the following paper into structured abstract.
1. Introduction: Intensive care units (ICU) in hospitals cater for critically ill patients who need immediate attention, such as emergency cases and surgery recovery. Due to the critical patient conditions, requests for ICU beds have to be processed with no waiting time; any delay could pose a significant threat to patients' safety. A lack of ICU beds may cause service level deteriorations, including ambulance diversion, surgery cancellation and refusal of admission. On the other hand, excessive ICU beds may unnecessarily take up hospital budget, space and other valuable resources. Thus, it is important for healthcare service providers to determine the proper ICU bed capacity in order to strike a balance between service level and cost effectiveness. To make the situation more complex, demand for ICU beds is growing due to the aging and growing population, and the stochastic nature of emergency cases adds considerable fluctuation and variability to that demand. All of the above mentioned factors make ICU bed planning an even more challenging task.
The whole ICU system can be described as a queuing model with no waiting area (Mulligan, 1985), where a patient arrival is accepted or rejected immediately. A typical queuing model is composed of three factors: arrival rate, service time and the number of servers. In the context of the ICU system, emergency and elective cases constitute the arrivals, the length of stay of each case is regarded as the service time, and the number of ICU beds is regarded as the number of servers. Once the distribution of the arrival rate, the length of stay and the number of servers are fixed, queue analysis can then be applied to calculate the relevant information, such as the rejection rate (Gorunescu et al., 2002; Harper and Shahani, 2002; McManus et al., 2004).
There are several problems with the above mentioned queuing model approach. First, the accuracy of queue analysis largely depends on how well variations (arrival rate and length of stay) in the ICU system are captured by the queuing model. However, it is difficult to match these variations with proper distributions. For instance, the arrival pattern of an ICU system is usually a mixture of emergency and elective cases. The emergency arrivals vary according to time of the day, day of the week and month of the year, while the elective arrivals are usually appointment based and can be considered deterministic. It is difficult to find a proper distribution to match such a mixed arrival pattern. At the server's side, length of stay is highly skewed: length of stay in the ICU is usually short because most patients in the ICU will be transferred to normal beds once stabilized, but some complicated cases may stay in the ICU for a much longer period. Many distributions, such as the lognormal and gamma, have been applied to describe the distribution of length of stay, but the fit was poor (Griffiths et al., 2006). Some research works proposed queuing models with customized distributions: Griffiths et al. (2006) proposed a queuing model with negative exponentially distributed inter-arrivals and multiple service channels each having hyper-exponentially distributed service time, while Houdenhoven et al. (2007) proposed a prediction model to estimate the length of stay for the queuing model.
Another problem of a queuing model is the difficulty of describing complex workflow.
For instance, the ICU studied in this paper takes patients from two sources, emergency cases and elective surgeries, and the service providers are interested in performance indicators such as the number of cancelled surgeries. A typical queuing model may not meet such requirements.
Considering the limitations of a queuing model mentioned above, discrete event simulation (DES) is applied in this paper to model the patient flow of the ICU system. DES has been widely applied in many hospital sections, including the outpatient clinic (Harper and Gamlin, 2003; Klassen and Rohleder, 1996; Zhu et al., 2009) and the emergency department (Connelly and Bair, 2004; Su and Shih, 2003; White et al., 1992). Simulation models of the ICU system have also been introduced or developed in many research works (Cahill and Render, 1999; Costa et al., 2003; Kim et al., 1999; Ridge et al., 1998). Costa et al. (2003) proposed a mathematical model to estimate the number of beds required using raw data; the model also provided the number of emergency patients transferred and the deferral rate of elective patients. Kim et al. (1999) applied queuing theory and computer simulation to estimate the proper utilization of ICU beds, taking into account the trade-off between patient safety and cost effectiveness. Ridge et al. (1998) studied the relationship between the number of beds and the rejection rate of ICU requests due to lack of a bed; simulations showed a non-linear relationship between the two. Cahill and Render (1999) constructed a simulation model to estimate the downstream and upstream resources needed by a fixed number of ICU beds to avoid rejection or diversion of ICU requests. Krekea et al. (2004) compared various simulation techniques (mathematical modeling, Markov modeling, Monte Carlo simulation and DES) and their applications to the ICU system, concluding that DES provides more flexibility to describe the complexity of the ICU system than the other techniques.
The main contribution of this paper is to develop a generic approach that estimates ICU bed capacity using DES. The approach is composed of three steps: first, a DES model is constructed and validated to reflect the workflow of the ICU. Second, the bed capacity needed by the ICU is estimated by collecting transaction data from the ground and feeding it into the model. Third, different what-if scenarios that the healthcare service providers are interested in are tested using the DES model, and the results are used as references for decision making.
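As a point of reference for the queuing baseline discussed above, the rejection probability of a loss system (a queue with no waiting area) can be computed in closed form with the Erlang B formula, assuming Poisson arrivals. The sketch below uses made-up arrival and length-of-stay figures, not the hospital's data; the paper's argument is precisely that such closed-form models fit the ICU's mixed, seasonal demand poorly, which motivates the DES approach.

```python
# Illustrative loss-system baseline: rejection (blocking) probability via the
# Erlang B formula. All numeric inputs are invented for illustration.

def erlang_b(servers: int, offered_load: float) -> float:
    """Blocking probability via the standard stable recursion:
    B(0, a) = 1;  B(c, a) = a*B(c-1, a) / (c + a*B(c-1, a))."""
    b = 1.0
    for c in range(1, servers + 1):
        b = offered_load * b / (c + offered_load * b)
    return b

arrivals_per_day = 3.0               # assumed emergency + elective rate
mean_los_days = 4.0                  # assumed mean ICU length of stay
offered_load = arrivals_per_day * mean_los_days   # a = lambda * E[LOS]

for beds in (13, 14, 15, 16):        # capacities explored later in the paper
    print(f"{beds} beds: rejection rate = {erlang_b(beds, offered_load):.1%}")
```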
2. Method: Figure 1 illustrates the overall patient flow of the surgical ICU department of a Singapore government hospital studied in this paper. There are two sources of inflow: one source is critically injured or ill patients from the emergency department; the other comes from the operating theatres, since part of the booked surgeries need post-operative stay in the ICU. Both sources need immediate admission to the ICU once the requests are raised. A slight delay may affect patient safety severely and lead to irreversible consequences; hence, no waiting for ICU beds or temporary buffer is allowed. If all the ICU beds are fully occupied, new emergency arrivals are overflowed to ICU beds available in other departments or diverted to other hospitals. On the elective side, surgeries which need post-operative ICU stay have to be cancelled and rescheduled to a new appointment date and time. Both overflow and cancellation are considered as rejection. Length of stay in the ICU depends on the patient's condition. There are four sources of outflow: discharge, transfer to a high-dependency bed, transfer to a normal ward and death. The service providers are interested in the following five performance indicators: patient day, bed occupation rate, the number of overflowed cases, the number of cancelled cases and the rejection rate. Patient day is defined as the sum of the length of stay of all admissions. Bed occupation rate is defined as the percentage of patient day over the total days provided by the ICU beds. Patient day and bed occupation rate describe the utilization of the ICU beds. The numbers of overflowed and cancelled cases are self-explanatory. The rejection rate is defined as the percentage of rejected cases (overflow + cancellation) over total arrivals.
Figure 2 illustrates the DES model based on the overall patient flow of the surgical ICU. Simul8® 2008 Professional is applied to construct the model. There are three flows in the DES model. The first flow is Emergency/Elective - Surgical ICU - Discharge/Transfer out/Death; this is the main flow when there are ICU beds available for emergency/elective arrivals. The second flow is Emergency - Surgical ICU - Other ICU; this flow occurs when no ICU bed is available when a new emergency case arrives, and the unmet demand is overflowed to, if possible, another ICU or diverted to other hospitals. The third flow is Elective - Surgical ICU - Cancellation; this flow occurs when no ICU bed is available when a new elective case arrives, and the elective case has to be cancelled and rescheduled for another slot in the waiting list.
In this paper, no distributions are assumed for either emergency or elective arrivals. Instead, actual operational data is fed into the simulation model directly. At the emergency side, the arrival time of patients who need immediate ICU admission is considered as the input. At the elective side, the appointment time of the elective patients to the ICU beds is regarded as the input. The length of stay of each individual patient is extracted from the actual operational data and paired with the arrival time or appointment time of the specific patient.
The advantage of applying actual operational data directly is the possibility of preserving all the variations and seasonal factors existing in the ICU department.
One problem with using actual operational data in the simulation model is the lack of length-of-stay information for the rejected patients, that is, emergency patients who are overflowed to other ICUs and elective patients whose appointments are cancelled. In this paper, the length of stay of the rejected patients is sampled from the distribution of length of stay in the actual operational data. Another problem with using actual operational data is that such data does not represent the future demand. In this paper, future demand growth is simulated by inserting fictional patient arrivals into the actual operational data. Such arrival insertion is based on the following two rules: first, the demand growth is homogeneous between emergency and elective cases, that is, the proportion between emergency and elective arrivals remains the same. Second, patient arrivals follow the same pattern as the original operational data: the variations of time of the day, day of the week and month of the year remain the same, and the demand growth of each time period is in proportion to the original demand distribution. The length of stay of the inserted arrivals is sampled from the distribution of length of stay in the actual operational data.
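The following is a minimal trace-driven sketch of the admission logic described above, written with the open-source SimPy library rather than the Simul8 package the study actually used. The toy record list stands in for the operational data; an arrival that finds all beds busy is counted as rejected, mirroring the no-waiting rule.

```python
# Minimal trace-driven sketch of the ICU loss-system logic (not the study's
# actual Simul8 model). Rejected here covers both overflow (emergency) and
# cancellation (elective).
import simpy

def admit(env, beds, los, stats):
    if beds.count < beds.capacity:      # a bed is free right now
        with beds.request() as req:
            yield req
            stats["admitted"] += 1
            stats["patient_days"] += los
            yield env.timeout(los)      # occupy the bed for the whole stay
    else:                               # loss system: no waiting allowed
        stats["rejected"] += 1

def feed(env, beds, records, stats):
    # records: (arrival_time_in_days, length_of_stay_in_days) pairs, taken
    # straight from operational data in the paper's approach.
    for t, los in records:
        yield env.timeout(t - env.now)
        env.process(admit(env, beds, los, stats))

records = [(0.2, 3.0), (0.5, 6.5), (1.1, 2.0), (1.3, 4.0)]   # toy trace
stats = {"admitted": 0, "rejected": 0, "patient_days": 0.0}
env = simpy.Environment()
beds = simpy.Resource(env, capacity=13)   # 13 beds, as in the original scenario
env.process(feed(env, beds, records, stats))
env.run()
total = stats["admitted"] + stats["rejected"]
print(stats, f"rejection rate = {stats['rejected'] / total:.1%}")
```

Re-running such a model with the capacity set to 13-16 beds, and with extra arrivals inserted to mimic 5 or 10 percent demand growth, reproduces the kind of what-if analysis reported in the next section.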
3. Results and discussion: A two-step model validation procedure is applied in this paper to ensure that the simulation model accurately reflects the actual situation of the ICU department. The first step is open box validation: the patient flow of the model and the parameter settings are presented to the service providers to confirm that the patient flow in the model is a valid mapping of the actual patient flow and that the parameter settings are reasonable estimations of actual practice. The second step is black box validation: simulation results for the original scenario are measured by the performance indicators and compared with the actual values from the operational data. Table I lists the results of black box validation, where the actual results are denoted by A and the simulation results are denoted by B. The observation period is 12 months and there are 13 ICU beds currently in service. Both actual and simulation results are broken down by month. It is observed from Table I that obvious variations exist among different months. During the peak months, the ICU beds are more utilized, with more patient days and a higher bed occupation rate; however, the numbers of overflows and cancellations increase accordingly, which leads to a significantly higher rejection rate. It is also observed from Table I that the simulation model accurately captures the monthly variation of the actual situation. Simulation results for all five performance indicators are close to the actual results, which indicates that the simulation model is well calibrated. The simulation model can therefore provide useful references during the evaluation of the what-if scenarios the service providers are interested in.
One what-if scenario the service providers are interested in is deciding the proper number of ICU beds to mitigate the monthly variation. For instance, one of the requirements is to maintain a rejection rate of no more than 5 percent in all months. Table I indicates that this requirement is only met in three months (January, September and December) in the original scenario; in the other, peak months, more ICU beds are needed to satisfy the requirement. Table II and Figure 3 illustrate the simulation results for the relationship between the rejection rate and the number of ICU beds in service. Three ICU bed capacities (14, 15 and 16 beds in service) were simulated in this study. It is observed from both Table II and Figure 3 that the rejection rate decreases significantly when there are more ICU beds in service. It is also observed from Figure 3 that the 5 percent rejection rate requirement is met in 7, 9 and 12 months when the number of ICU beds in service is 14, 15 and 16, respectively. If the number of ICU beds in service is fixed for all months, 16 ICU beds are needed to ensure that the 5 percent rejection rate requirement is met in all months. Table II also lists the performance indicators other than the rejection rate. It is observed that adding more ICU beds does not necessarily increase patient days in idle months (January, September and December). Considering the existence of monthly variation, the practice of keeping a fixed, large number of ICU beds in all months (16 ICU beds in this study) may cause a waste of resources in idle and normal months. A more cost-effective approach is to match the number of ICU beds in service with the actual monthly volume.
According to Table II and Figure 3, the original 13 ICU beds are sufficient in January, September and December; 14 ICU beds are needed in June, July, October and November; 15 beds are needed in February and March; and 16 beds are needed in April, May and August to meet the 5 percent rejection rate.
Another scenario the service providers are interested in is the number of ICU beds in service needed to handle future demand growth. In this paper, two growth rates were simulated: 5 percent and 10 percent. The growth rate is assumed to be homogeneous between emergency and elective admissions; this assumption is based on the fact that the healthcare service providers in the ICU system are mostly interested in the overall demand growth, and the source of the demand is relatively less significant. Table III illustrates the simulation results for the number of ICU beds in service needed to meet the 5 percent rejection rate under the two suggested growth rates. It is observed that more ICU beds are needed to handle the growth: on average, one extra ICU bed is needed when the growth rate is 5 percent, and two extra beds are needed when the growth rate is 10 percent.
The above results show how the DES model was applied to the surgical ICU department of a Singapore government hospital. Necessary adjustments are needed when the DES model is applied to other hospitals. For instance, the flow of the DES model needs to match the workflow of the ICU in the other hospital, and the what-if scenarios may also vary to reflect the interests of the service providers there.
4. Conclusion: This paper developed a DES model to simulate the complex patient flow in a surgical ICU department of a Singapore government hospital. The DES model takes two sources of inflow: emergency and elective cases. The ICU beds are assigned on a first-come, first-served basis. Once all the ICU beds are occupied, incoming emergency cases are overflowed to other departments or diverted to other hospitals, and elective cases are cancelled. Actual operational data was collected and fed into the DES model to capture the variations in the system accurately. The calibrated model was used to test the what-if scenarios the healthcare service providers are interested in. Two what-if scenarios were tested in this paper: the proper number of ICU beds in service to meet the target rejection rate, and the extra ICU beds in service needed to meet demand growth. Results show that the proposed DES model accurately describes the actual situation and is flexible enough to test different what-if scenarios.
|
The humanitarian imperative for education in disaster response
|
[
"Disaster",
"Education",
"Protection",
"Humanitarian",
"Response",
"Emergencies",
"CFS",
"EiE"
] |
Summarize the following paper into structured abstract.
Introduction: The growing number of humanitarian crises around the world, with their multiple contexts and forms, has caused renewed thinking about how we go about the business of disaster response and humanitarian aid. This is true not just for how we deliver aid, but also for who we deliver it to and what we deliver to them.
Disasters and education: Disasters continue to have a major global impact on the lives of people, affecting millions (UNHCR, 2017). From 1994 to 2013, 218 million people were affected by natural hazards annually. Geophysical disasters accounted for the highest number of deaths among affected populations over the same 20-year period, at 750,000 (CRED, 2015). This has resulted in a complex field of emergency response.
Education as a global concern: Access to education as a global priority is a cornerstone of recent development agendas. The Millennium Development Goals (MDGs) were instrumental in establishing education as a necessary objective, taking the view that there were no circumstances in which people could be denied basic rights to education. After mixed success, the MDGs were replaced with the Sustainable Development Goals (SDGs). Sachs (2015, p. 53) determined that, despite not achieving all that they could have, should have, or what we were told they would have, the MDGs had "made a real difference". For the SDGs, the goals of education and gender equality are considered complementary to achieving educational objectives. Goal 4 - inclusive and equitable quality education - and Goal 5 - gender equality (UN, 2015, pp. 17-18) - combine to deliver development goals that require nations to ensure that all boys and girls have access to free, inclusive, quality primary education. This is true also during crises, directly referring to the needs of the vulnerable, which are identified as including "persons with disabilities, indigenous peoples and children in vulnerable situations" such as complex humanitarian emergencies (UN, 2015, p. 17).
Varied benefits of EiE: In many ways, education owes its status as a response activity not to the act of formal education itself, but to the associated benefits that come along with it. EiE is inclusive and encompasses all levels of education, including adult and vocational education; however, the biggest drivers of funding and advocacy relate to children, highlighted by attempts to mainstream "universal primary education" into the discourse. Because children are one of the more visible, vulnerable groups, arguments regarding the benefits of education to them are often more persuasive. These include the impact of EiE on child protection, as well as on both mental and physical health, which will be discussed here.
Conclusion: The above analysis indicates the holistic merit of education in disaster response and that, even though the instruments of international law compel education to be considered from a rights-based perspective, the humanitarian imperative presents a more coherent framework as to why education should be given a greater role in humanitarian response. Yet, despite the growing coverage of the multiple benefits of EiE, education still receives only a limited share of humanitarian funding - approximately 2 per cent. The commitment to increase this funding to 4 per cent has not been met, and specific disasters generally receive less than 40 per cent of the resources requested for education. Meanwhile, the number of children in emergency situations unable to participate in education reaches into the millions, and continues to grow.
|
No we won't! Teachers' resistance to educational reform
|
[
"Israel",
"Teachers",
"Strikes",
"Internet",
"Educational reform",
"Resistance",
"Policymaking",
"Social media",
"Politics"
] |
Summarize the following paper into structured abstract.
__NO_TITLE__: Over the last few decades, national governments have frequently used policy-led reforms as a means of improving school system operation, learning processes and student outcomes (Gaziel, 2010; Hess and Kendrick, 2007; Spillane et al., 2002). These governments employed educational reform with the aim of reconstructing central components of the school system, such as the managerial system, organizational structure, financing processes, curriculum, pedagogy and human resources issues, so as to attain policy objectives (Gaziel, 2010). Such systematic restructuring often emerged as a result of a feeling shared by national policymakers and the public that the school system had failed in its role, mainly with regard to academic achievements or equal opportunities for all students (Barnett and Whitaker, 1996; Cuban, 1990). The subsequent reforms generally aimed to rapidly and dramatically change the system (Fullan and Miles, 1992). Far-reaching reforms often carry broad implications for teachers (Campbell, 1996); therefore, it is not surprising that they have often evoked resistance.
Political processes and teachers' resistance to reform: Policy decisions which deal with distribution, re-distribution and regulation (Lowi, 1964) can often result in power struggles, especially when parties have different expectations and interests which may conflict in certain circumstances (Lasswell and Kaplan, 1950). Most of the time there is a substantial difficulty in separating policymaking from the attendant politics (Dale, 1983). That is why one of the key junctures at which power struggles occur is during policy discussion and decision making (Elamin, 2007), in which a variety of policy actors, such as politicians, business leaders, citizens' groups and private individuals (Najam, 1999), debate the suggested reform. In some cases, the very choice of the issue under discussion or the way in which it is presented can lead to conflict (Dutton and Penner, 1993), since a proposed organizational solution or a new policy agenda can be perceived as favoring the interests of specific individuals or groups (Pettigrew, 1977). Therefore, political processes and dimensions of power, such as influence, values, ideology, and patterns of cooperation and conflict, are relevant to understanding educational policymaking and implementation processes (Bjork and Blase, 2009).
The terms micro- and macro-politics are associated with the analysis of political processes in the education and school system. While micro-politics refers to the mechanisms of power which individuals and informal groups employ within schools to achieve their goals, macro-politics refers to the power affecting educational decision-making processes at a regional or national level (Bjork and Blase, 2009; Kelchtermans, 2007; Weiler, 1994).
So far, research has focused on these lateral influence processes (i.e. uni-level influence processes, either at the school level or at the regional/national level) and neglected cross-level influence attempts. Such attempts may occur, for example, when specific teachers resist state policy and attempt to mobilize teachers in other schools, or agitate public opinion to change government policy, or when the state attempts to silence such a teacher. Therefore, a conceptual framework of political processes in education is presented to help categorize these circumstances (Table I). The model presents two axes: the arena (i.e. inter-organizational versus intra-organizational) and the policy actors (i.e. individuals and informal groups versus organizations and institutions).
Teachers' resistance to reform has so far been researched mainly at the school level, focusing on the micro-political aspects of the resistance (Ball, 1987, 1994; Blase, 1997) and recommending to principals and policymakers how to address it (Hess, 1998; Zimmerman, 2006). However, cross-level teachers' resistance in the context of wider public debate has been ignored. Such resistance represents bottom-up politics and involves the use of non-formal mechanisms to influence decisions and policies from below (Jaeger, 2007). In a democratic setting, the political goal in such a power conflict is to gain public legitimacy (Pettigrew, 1985). It is widely agreed that one of the central arenas in which power and legitimacy are gained in today's world is the media (Curran, 2002). Effectively using the media enables the harnessing and consolidation of influence in macro-political processes (Green-Pedersen and Stubager, 2008).
Political communication and policy agenda setting: There is a growing recognition that politics is becoming more and more communicated and "mediatized" (Mazzoleni and Schulz, 1999). Political communication is defined as purposeful discourse about resource allocation, authority and sanctions, aimed at achieving specific goals (Denton and Woodward, 1990). The political use of the media aims to communicate views, solutions and interpretations of issues in order to mobilize civic support (Froehlich and Rudiger, 2006).
Political communication has great significance in shaping the public agenda and its interpretation (Scheufele, 2000). Researchers have found that the media has an essential role in defining the important issues on the agenda, prioritizing them and framing their interpretation (McCombs and Shaw, 1972). This interpretation is highly significant because, if widely adopted, it can motivate political action, mobilizing individuals personally (Scheufele, 2000) and initiating collective pressure in policy conflicts (Dery, 2000). Therefore, policy actors try to influence agenda setting (Kingdon, 1984). Policy actors attempt to promote a specific problem definition and thus to frame the circumstances and draw attention to certain aspects of the situation, and by doing so, to advance specific solutions (Weiss, 1989).
When this effort assumes an organized form, it is called a political campaign (Trent and Friedenberg, 2008). The emphasis in using the media to convey messages designed for the public is on doing so in a persuasive manner (Mutz et al., 1996). Political messages are effective when they are simple to understand (Cobb and Kulinski, 1997) and may use reason or emotion to persuade the audience (Mio, 1996). Because people are inundated with information (Chaiken and Stangor, 1987), images in the media have great effect (Mio, 1996).
Web-based campaigns and policy agenda setting: The internet has a major impact on the public sphere (Dahlgren, 2005). For instance, Blumler and Kavanah (1999) claim that nowadays the accessibility and variety of media platforms and technologies have changed the way people receive political information. This reality enables citizens to participate in public debates, unlike in the past, when politics was discussed only by politicians, journalists, commentators, experts and leaders of interest groups (Hallett, 2005). The diversification of participants is also reflected in small communities and the proliferation of numerous other alternative voices (Kahn and Kellner, 2004).
Over the past few years, researchers have noted an increase in the use of political web-based campaigns (Sundar et al., 2003). The new age of media and its platforms is reducing dependence on third parties, allowing individuals to reach the public directly without intermediaries (Hallett, 2005). These technologies create an arena for activism, often referred to as cyber-activism (Illia, 2003).
One prominent internet platform that is increasingly used in shaping public opinion is the blog (MacDougall, 2005). A blog is an online journal in which written posts are published by a blog editor called a blogger. Entries can be written by the editor of the blog or in the name of other writers, while the blog editor directs the posts and the discourse they contain. Compared to the mainstream media, blogs receive relatively little attention, though there is wide consensus that blogs now play an important role in influencing the public debate in the mainstream media, political processes and policy processes (Drezner and Farrell, 2008). Bloggers are therefore often referred to as opinion leaders (Kavanaugh et al., 2006).
Political blogs contain opinions and commentary on political issues. Regarding their role in public discourse, blogs have been described as the "lens focusing attention on an issue until it catches fire" (Grossman and Hamilton, 2004, p. 3). In addition, blogs allow for rapid publication, covering events in real time (MacDougall, 2005). The weblog platform allows individuals and groups rather than professional journalists to express their opinions, resulting in turn in a challenge to the institutionalized structure of the media, in the form of a more egalitarian and less hierarchical field for public debate (Pickard, 2008).
Another prominent internet platform is the partisan web site. These sites aim to convey the messages of a candidate or an interest group, although sometimes they argue that their aim is merely to communicate information (Gibson et al., 2003).
This case study investigates the rhetoric and images used in web-based campaigns by teachers to secure public support for their resistance to the "New Horizon" reform during Israel's 2007 teachers' strike. During the strike, many teachers independently established and maintained blogs, and in some schools the teaching staff transformed the school web site into a partisan site to express their opinions and win support from parents and pupils.
"New Horizon" reform in Israel: In 2001, the collective wage agreement signed between the Israeli Government and the teachers' unions expired. Israel has two teachers' unions: "The Teachers Union", uniting most of the secondary school teachers and including about 40,000 members; and "The Teachers' Association", comprising kindergarten teachers, primary school teachers and a small segment of junior high school teachers, including about 80,000 members in total. Between 2001 and 2005, both teachers' unions negotiated with the Ministry of Finance and the Ministry of Education to reach a new collective wage agreement, but without success. The ministries also sought to attach a system-wide reform to the new wage agreement.
In 2006, after the government had declared its intention of bringing about educational reform, the teachers' unions parted ways. "The Teachers' Association" signed a wage agreement with the government which in principle accepted the reform. That reform, known today as "New Horizon", was intended to extend the school day, mainly by adding teaching hours for small-group tutoring. In return, it was declared that the formula used to derive teachers' salaries would be modified so as to lead to salary increases. "The Teachers Union" refused to accept the reform, and its leaders and members declared a work dispute (work to rule) with the aim of reshaping the reform in such a way as to avoid harming their employment conditions while addressing the problems of the system.
The teachers opposed to the reform claimed that their work conditions would be worsened by increasing their workload while in effect reducing their hourly wage. They also claimed that the government's desire to extend the teachers' work day was part of a long-term plan to lay off teachers and cut back the total number of teaching personnel. In addition, they argued that the reform did not address the major problems of the Israeli educational system, such as overcrowding in classrooms and the previous years' cumulative cuts in total teaching hours.
Shortly after the start of the 2007-2008 school year, "The Teachers Union" went on strike. From October 10 to December 12, 2007, 550,000 secondary school pupils stayed at home. The 64-day strike was the longest in the history of the Israeli educational system. During the strike, meetings were held between the parties, with various attempts at mediation, but no progress was achieved. The teachers demonstrated in the streets throughout the country. For the duration of the strike, the teachers enjoyed strong public support: for example, in a "Channel 10 News" survey, 69 percent of the Israeli public backed the teachers in their fight and only 10 percent thought that the government's demands were reasonable (Channel 10 News, 2007). The high point of the teachers' struggle was a mass demonstration, which included about 100,000 supporters. After two months, under pressure from the labor court and the threat of a back-to-work injunction, the parties held intensive negotiations leading to a new wage agreement, partly contingent on postponing discussions on the reform to the future. The Prime Minister personally committed to reinstituting the cut teaching hours and promised to take action to reduce the large number of pupils in classrooms.
Method: The study is based on a qualitative research paradigm (Denzin and Lincoln, 2000), which strives to find new understandings of processes in their natural environment (Bogdan and Biklen, 1992). The empirical method used in this paper is a descriptive documentary case study (Gerring, 2004). The purpose of this paper is to illustrate the agenda setting strategy of the virtual teachers' communities which resisted the "New Horizon" reform during the 2007 Israeli strike. Mining the political activity of such a community can provide readers with a close look at teachers' bottom-up resistance rhetoric and political influence strategy. This type of methodology is suited to describing teachers' attitudes and behaviors (Denton et al., 2003; Miller et al., 2005; Smith, 2004). In order to identify the salient characteristics of the phenomenon, a multiple cases design was selected (Stake, 1995). In a multiple cases design, the selection of cases is guided by replication logic, because generalization of the results is made to theory (Yin, 1994). The analysis reveals repeated patterns. Multiple cases, which document activity over a period of time, can facilitate a better understanding of political and policymaking processes (Gerring, 2004). Site selection
Findings: The data analysis revealed a number of themes relating to the use of media in a political campaign as part of opposition to reform and the design of the messages conveyed. Content analysis of the text entries resulted in the classification of the data into several categories. Five main categories arose from the data analysis (Table III). The findings in this section are presented in the order detailed above. In the following sections, each category is described and representative text examples are presented. 1. The "media front"
Discussion and conclusions: This study describes the bottom-up political strategy of teachers, as expressed in the political rhetoric communicated as part of their resistance to educational reform. The findings present the way issues were adapted and presented in the narratives and messages of opinion leaders (Nisbet and Kotcher, 2009). The study directs attention toward the employment of the media in educational policy debates in the present age. Results suggest that teachers' resistance included bottom-up politics to influence policy decisions. Furthermore, they suggest that teachers' rhetoric, as it emerges from blog posts and school site manifestos, contains well-formulated political messages aiming to garner public legitimacy. This legitimacy is the ultimate goal of every democratic power struggle (Pettigrew, 1985). Six conclusions were derived from the study. One, findings indicate the perceived centrality of the media in general, and the internet in particular, to teachers resisting reform. The media arena was viewed as one of the major, if not the leading, arenas in which policy decisions are debated and political influence won. Therefore, many of the attempts to influence political events from the bottom-up were aimed at changing agenda setting through those platforms. Findings portray a technique which utilizes indirect pressure on policymakers, addressing other more accessible actors, to more effectively influence events. Two, the rhetorical techniques employed in messages resisting reform were similar to those used in political campaigns. One possible explanation for this is that we live in a highly media-centric world, and that in this environment isomorphism and imitation in public debate occur. The findings show that the arguments opposing the "New Horizon" reform are similar to those identified in previous studies as justifying opposition to other political policies. For example, the writers combined emotional arguments with rational arguments and tried to present the opposing side as untrustworthy. These rhetorical techniques resemble those found in campaign blogs of US presidential candidates in 2004 (Trammell, 2006). Three, teachers attempted to present themselves to the public as "champions of education". Similar displays of communicative behavior trying to claim "issue ownership" have been reported in political studies (Green-Pedersen and Stubager, 2008). Furthermore, it has been found that if one party addresses an issue more in the media, the party appears in the public mind as its "owner" (Walgrave et al., 2009). Such rhetorical arguments therefore serve the dual purpose of resisting reform while advancing an image of teachers as caring about the system and possessing special expertise. Four, the writers employed dramatic elements to mobilize support for the struggle. For example, they described the dominant figures in the government as "fools" and "villains". This sort of labeling tries to reduce the persuasiveness of the reform leaders' messages by presenting them as unreliable and driven by ulterior motives. The credibility of the communicator is considered a major asset in gaining political support (Mio, 1996). This finding supports previous findings on the use of drama in politics (Borreca, 1993), such as labeling dominant figures in public conflicts "heroes", "villains" and "fools" (Klapp, 1964). Five, underlying metaphors emerged from the arguments.
These latent images presented the reform leaders as cold and detached in their "ivory tower" and the teachers as emotional and passionate in the "trenches" of the educational system. Such images are consistent with the claim that power struggles involve several levels of symbolic meaning, in which the stated attribute often represents a more abstract and implicit depiction (Gusfield and Michalowicz, 1984). The symbolic images of educational policymakers as acting from their "ivory tower" and teachers as operating in the "trenches" have been mentioned before in the context of educational system conflicts (Schlechty and Joslin, 1986). Effective symbolism is at the heart of successful media employment and public opinion formation (Mio, 1996), because it simplifies complex issues and makes them accessible to the general public (Thompson, 1996).
Limitations and implications: The current study presents an innovative theoretical framework in two ways. First, it develops a new conceptual model of political processes in education. Such a model can serve researchers as a cornerstone for mapping political processes inside and outside of school walls. Second, the study describes teachers' agenda setting strategy employing bottom-up attempts to exercise influence. Thus, the paper elaborates the operational proceedings of the theoretical definition suggested in the model with regard to the way in which individuals and informal groups can influence national educational policy. In addition, this study also possesses theoretical significance for educational research because it shifts attention towards the political employment of the media for influencing policy processes in education, a subject which has so far been greatly neglected. The present study focuses on an overlooked aspect of the resistance to educational reform, namely the political messages and rhetoric employed by the opposition. Nevertheless, the study possesses several limitations. The collection of online documents was conducted nearly two years after the strike ended. This might have caused the loss of important data regarding teachers' agenda setting strategy. Also, combat-ready opposition is much more motivated and active, possibly marginalizing less aggressive voices on the web (Rainie et al., 2003). Real-time data collection of future political events might overcome these shortcomings. Furthermore, because this research is exploratory and preliminary, there is a need for additional study to examine teachers' bottom-up politics and the employment of media as a political tool in educational policy conflicts. It would be worth examining the relationship between the media messages resisting educational reform and their effectiveness in influencing political opinion and encouraging real-world activism. Another important research topic would be to compare the chronological dynamic of messages published by educational reform leaders and their opponents. The research also poses several practical implications for policymakers. The rise of the media in the public arena makes it impossible to ignore when initiating and implementing policy (Borreca, 1993). The use of media can enable teachers to influence macro-political processes from the bottom-up. While the effects of uni-level politics (micro or macro) are documented (Ball, 1987, 1994; Bjork and Blase, 2009; Blase, 1997; Kelchtermans, 2007; Weiler, 1994), little is known about teachers' bottom-up politics, which may have serious and lasting repercussions on policy actors' morale, public legitimacy and their relationships with others. Therefore, the study's findings might encourage policymakers to promote shared policy formulation with teachers and invest more resources in preparing the ground for change. Study findings may be applicable to other contexts. Countries with a geographic and government structure similar to Israel's (i.e. small and centralized nations, Inbar, 1986) are more likely to serve as fertile ground for bottom-up politics. Moreover, the findings may be relevant to large decentralized countries in which national control has been replaced by regional governments (Hanson, 1998).
|
Understanding "disengagement from knowledge sharing": engagement theory versus adaptive cost theory
|
[
"Knowledge sharing",
"Adaptive cost theory",
"Disengagement",
"Engagement theory",
"Knowledge hoarding"
] |
Summarize the following paper into structured abstract.
1. Introduction: Knowledge sharing is critical for organizational success (Alavi and Leidner, 2001; Birkinshaw and Sheehan, 2002), yet there are a variety of reasons why employees fail to share their knowledge. For example, an employee's desire to protect their knowledge, or an employee responding to organizational expectations to protect their knowledge, are two reasons why employees hide or hoard their knowledge (Webster et al., 2008; Connelly et al., 2012; Ford, 2008). However, in many cases, knowledge sharing fails to occur even when employees feel no need to protect their knowledge (Ford, 2008).
2. Literature review: 2.1 Disengagement from knowledge sharing
3. Methodology: 3.1 Participants and procedure
4. Results: The data analysis for this study proceeded as follows. First, we assessed the psychometric properties of each construct. Second, the relationships between the independent variables (job engagement, meaningfulness, safety, availability) and disengagement from knowledge sharing were examined through a series of structural equation models. Our research objective was to test hypotheses and compare two competing models (Figure 1). H1-H4 were tested by examining the significance of the paths between the constructs. We further examined the relationship between independent variables and disengagement from knowledge sharing by using a nested-models approach (Anderson and Gerbing, 1988) and report the parameter values, chi-square, Bentler's comparative fit index (CFI), the Tucker Lewis Index (TLI or NNFI) and the standardized root mean square residual (SRMR) for the series of structural models.
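The record names the fit statistics but not the estimation code. As a hedged illustration of how such a nested-model comparison could be run, the sketch below uses Python's semopy package; the model syntax, variable names and data file are hypothetical stand-ins, not the authors' actual specification from Figure 1.

```python
import pandas as pd
from semopy import Model, calc_stats  # assumes the semopy SEM package

df = pd.read_csv("engagement_survey.csv")  # hypothetical data file

# Two competing structural models (hypothetical shorthand for Figure 1)
direct = "disengagement ~ job_engagement + meaningfulness + safety + availability"
mediated = """job_engagement ~ meaningfulness + safety + availability
disengagement ~ job_engagement + availability"""

for name, desc in [("direct", direct), ("mediated", mediated)]:
    model = Model(desc)
    model.fit(df)
    stats = calc_stats(model)  # one-row DataFrame of fit indices
    print(name, stats[["DoF", "chi2", "CFI", "TLI"]])
```

A chi-square difference between the nested models can then be computed from the printed values; note that semopy reports RMSEA rather than SRMR, so the SRMR reported in the paper would need a dedicated computation.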
5. Discussion: This paper sought to address the question of why people do not share their knowledge with others at work, that is, why they become disengaged from knowledge sharing. First, it should be noted to what extent disengagement from knowledge sharing occurred within this sample. On average, participants reported being disengaged from knowledge sharing one to two times in the previous four weeks. Of all the participants, 74.6 per cent reported having been disengaged from knowledge sharing at least once in the past month, of which 7 per cent reported being disengaged from knowledge sharing at least once a week. In comparison to other "lack of sharing" behaviors, such as knowledge hoarding, where the individual actively protects the knowledge and does not share it, disengagement appears to be more frequent, as other research has identified knowledge hiding (i.e. protecting requested knowledge) and hoarding to be low-base-rate behaviors (Connelly et al., 2012; Zweig and Trougakos, 2008). Thus, it is relevant to examine the causes of disengagement from knowledge sharing as opposed to confounding this behavior with knowledge hoarding or hiding.
6. Conclusion: Currently, there is an assumption within the knowledge management literature that when individuals do not share their knowledge, they are intentionally withholding and protecting it. However, this study illustrates that it is more likely that individuals are simply disengaging from the sharing process (i.e. not communicating their knowledge but also not protecting it). Furthermore, the results from the study indicate that Adaptive Cost Theory, infused with Engagement Theory, explains the lack of sharing within organizations and that disengagement from knowledge sharing is likely a common problem rather than the malevolent, opportunistic behavior of knowledge hoarding. The key predictors of disengagement from knowledge sharing are availability (i.e. health), job engagement (as a demand) and meaningfulness (whose effect is fully mediated by job engagement).
|
Antecedents of innovation in industry: The impact of work environment factors on creative performance
|
[
"Work environment"
] |
Summarize the following paper into structured abstract.
Introduction: Ongoing uncertainty in the modern business environment requires managers to strive to find suitable alternatives for a business to survive and develop. Creativity and innovation are increasingly important in relation to developing skills in organizations, underscoring the critical role that fostering creativity plays in management for effective innovative performance.
Review of the literature: This section presents a review of the research on which this study is based and is divided into three topics: determinants of creative performance in companies; the componential theory of creativity; and the KEYS research instrument. The aim of this review is first to present various studies, including the most recent, which establish a connection between practices and behaviors that promote creativity in the business environment at both the individual and working group levels. Next, the theory on which this study is based is presented in greater detail, followed by a description of the KEYS instrument. The componential theory of creativity underlies the KEYS instrument.
Method: To answer the questions posed by the study, an exploratory study was undertaken, which nevertheless included a test of the hypothesis. The research consisted of a field study designed to explore individuals' perceptions about their work environment at a particular moment (Amabile et al., 2010). In light of the limited information about organizations that actually foster creativity in Brazil, the sample was chosen for heterogeneity and is based on openness to participation in the study.
Results: In light of the goal of assessing the impact of various measures of the work environment on creativity, a multiple regression analysis was used to estimate a model. The values for the work environment dimensions were examined, and the behavior of the dependent variable, creativity, was observed and compared using the estimated regression equation.
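The abstract names multiple regression but gives no estimation detail. A minimal sketch of such a model in Python with statsmodels, using hypothetical column names for the KEYS dimensions and the creativity criterion:

```python
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("keys_survey.csv")  # hypothetical data file

# Hypothetical names for the KEYS work-environment dimensions
predictors = ["organizational_encouragement", "supervisor_encouragement",
              "work_group_support", "freedom", "sufficient_resources",
              "challenging_work", "workload_pressure",
              "organizational_impediments"]

X = sm.add_constant(df[predictors])      # add intercept
fit = sm.OLS(df["creativity"], X).fit()  # ordinary least squares
print(fit.summary())                     # coefficients, t-tests, R-squared
```

In such an output, significant coefficients would correspond to the dimensions the Conclusions single out (organizational encouragement, challenging work and work group support).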
Discussion: The theoretical foundations presented and the results obtained show that companies should be organized in a manner that strengthens the dimensions that foster a creative social work environment (Amabile et al., 1996; Ensor et al., 2001). This study is based on the guidelines of the componential theory of creativity and the KEYS instrument, and the use of this instrument along with accepted premises and solidity contributed to the data obtained.
Conclusions: In this study, the factors that were shown to have a significant impact on the creative process in the sample were organizational encouragement, challenging work and work group support. The analyses performed made it possible to select these variables as having the greatest impact on creativity. Factors such as freedom, supervisor encouragement, sufficient resources, realistic workload pressure and absence of organizational impediments were not significant. The reasons for this counter-intuitive result are unknown, and only additional studies can shed light on the problem. The data collected do not allow one to draw conclusions about the reasons for the results, which is one of this study's limitations. The complete absence of Brazilian studies based on componential theory makes the search for an explanation even more difficult. It is possible, however, to speculate that the cultural factor certainly must have contributed to the results (Zhou and Su, 2010). A follow-up study with a new questionnaire aimed at further analysis of a qualitative nature focusing on the cultural aspect and differences between the USA and Brazil could shed new light on the results. An alternative explanation for the counter-intuitive result is that the sample may have had too little variety, so a confirmatory factorial analysis based on a large sample of Brazilian individuals and companies could find more broad-based support for the factorial structure proposed in Amabile's model.
|
Human resource management impact on knowledge management: Evidence from the Portuguese banking sector
|
[
"Service sector",
"Knowledge management",
"Training",
"Human resource management",
"Career development",
"Retention"
] |
Summarize the following paper into structured abstract.
1. Introduction: This paper focuses on questions of organizational knowledge, human resources and the dynamics of relations developed between them, within the dominant perspectives and assumptions in people management. The literature suggests that knowledge management (KM) is not alien to the orientations adopted in the management and application of processes related to people.
2. Theoretical background: In the current context, where change is the main factor affecting evolution at the most varied levels, the greatest challenge organizations face is in their capacity to create, improve and manage new knowledge as a valuable asset (Pinho et al., 2012). As stated by Akhavan et al. (2013), successful organizations will be the ones that are able to improve and develop their knowledge. This implies thinking of people as creators and holders of knowledge, with potential and competences that should be directed and collectively organized, besides reorienting management practices according to the demands of the emerging knowledge society. Therefore, people take on increasing importance (Ubeda-Garcia et al., 2013).
3. Empirical study: 3.1. Implementation
4. Discussion: Attempting to interpret the results relating to the sub-scales in analysis (CMPA, PTPA and RPA), we find that for career management, F1 - career management based on merit and development presents a positive predictive capacity for the KM scale as a whole and for each of its constituent factors. It stands out, however, that the predictive capacity shown is greater concerning Factors 1 and 3 of KM (knowledge-centred culture and formal KM practices). F2 - career management based on length of service and tenure only allows a forecast of F1 - knowledge-centred culture, this forecast being negative. Therefore, the more an organization adopts career management practices based on length of service and tenure, the lesser the presence of a culture oriented to knowledge. Indeed, whereas F1 - career management based on merit and development is a factor promoting and facilitating knowledge-centred culture, F2 is seen to inhibit this.
5. Conclusions, main contributions and limitations: 5.1. Conclusions
|
Role of virtues in the relationship between shame and tendency to plagiarise: Study in the context of higher education
|
[
"Emotions",
"Graduate education",
"Plagiarism",
"Virtues",
"Quasi-experiment"
] |
Summarize the following paper into structured abstract.
1. Introduction: Plagiarism is the unlawful use of another author's ideas or words and representing them as one's genuine work. The Roman poet Martial used the term "plagiarius", a Latin word meaning kidnapper, claiming that another poet had "kidnapped his verses". According to Mallon (1989), "the Elizabethan playwright Ben Johnson was the first person to use the word 'plagiary' to mean literary theft, at the beginning of the 17th century".
2. Theoretical background: Most universities around the globe consider plagiarism, in the form of copying others' work and representing it as one's own, as the most serious offence. It is increasing day by day because such acts are challenging to detect. Even when some cases are caught, some universities have no proper rules for taking action against or punishing the offenders. This, in turn, gives an impetus to such copying activities for gaining grades. Detection of plagiarism is time-consuming and requires a lot of effort. Presently, there is an increase in plagiarism in varied types of academic texts. It is very difficult to measure the attitude of students towards plagiarism; researchers should use psychometrically tested instruments for a better understanding of plagiarism behaviour among students (Ehrich et al., 2015).
3. Measures of the study: In this study, the questionnaire is taken from previous research by Herjanto (2013) on digital piracy. Seven-point Likert-type scales are used, with anchors ranging from 1=I do not feel anything at all to 7=I feel this very strongly for felt emotions (shame) and 1=I definitely would not to 7=I definitely would for individual virtues and outcome behaviour. The questionnaire contains questions related to the felt emotions (external and internal shame), individual virtues (influence, competitiveness and equality) and outcome behaviour (repair, discontinuance and avoidance behaviour). Along with the scale items, respondents were also asked to provide basic demographic information related to their gender, age, programme enrolled and religion.
4. Data analyses: Partial least squares (PLS)-based structural equation modelling (SEM) is an advanced statistical technique that includes factor analysis and regression analysis simultaneously to examine the relationship between measurement indicators and constructs (Hair et al., 2017). SEM is commonly used in social science research to develop and test theories using survey data. SmartPLS software was used for testing the measurement model as well as for hypothesis testing (Ringle et al., 2015).
5. Results of hypotheses testing: H1. The feeling of shame is associated with the outcome behaviour (i.e. avoidance, discontinuance and repair).
6. Discussions and conclusions: In this study, a conceptual framework is developed to show the relationship between the emotion of shame and plagiarism-related outcome behaviours. The framework, based on recent psychological research, is tested on graduates. The results show strong support for the scenario-based effect of the felt emotion, i.e. shame, on outcome behaviour for plagiarism, with a moderating effect of specific individual virtues. We found that manipulated shame resulted in feelings of both internal and external shame. When individuals feel internal shame, they discontinue plagiarism. They also try to repair the damage that they cause by plagiarism. However, the feeling of external shame encourages individuals to discontinue plagiarism but does not influence their avoidance and repair behaviour. Virtues such as influence, competitiveness and equality weaken the relationship between internal shame and plagiarism-related outcome behaviour. At the same time, these virtues do not affect the relationship between external shame and outcome behaviours. When individuals place high value on influence and competitiveness, they may justify the act of plagiarism even if they internally feel ashamed. Similarly, they may justify the act of plagiarism when they place high value on equality. This may be because, in a country like India, they may think that they are deprived of resources which are easily available to their counterparts in well-developed economies and therefore they may perceive the act of plagiarism as a means for achieving equality.
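The authors tested moderation in SmartPLS, a GUI tool whose workflow cannot be reproduced verbatim here. As an illustrative stand-in (a deliberate substitution, not the paper's PLS-SEM pipeline), the same moderation logic can be expressed as an interaction-term regression; the variable and file names are hypothetical.

```python
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("plagiarism_study.csv")  # hypothetical data file

# Does competitiveness weaken the internal-shame -> discontinuance link?
# The '*' expands to both main effects plus their interaction term.
fit = smf.ols("discontinuance ~ internal_shame * competitiveness",
              data=df).fit()
print(fit.params["internal_shame:competitiveness"])  # a negative coefficient
                                                     # would indicate weakening
```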
7. Implications: The findings of the study have significant implications for the management of scholarly work at educational institutions and universities. It is important for universities to understand the role of specific emotions in influencing individuals' plagiarism behaviour. Specifically, the study suggests that universities should provoke the emotion of shame to control plagiarism and encourage pro-social behaviour among scholars.
8. Limitations: All efforts were made to conduct theoretically and empirically sound research; despite this, no study is free from limitations. Geographically, India is a varied and vast country, and this study is limited to certain geographical regions. There is scope to analyse the plagiarism issue at sub-national levels (Gaur et al., 2014). Another limitation of the study is the use of narrative descriptions of fictitious scenarios to generate self-conscious emotions among scholars. In future studies, the use of videos and real-life examples may enhance the degree of realism of the manipulations and generate stronger emotions (Xie et al., 2015). Due to time and funding constraints, the sample size is small for this type of study. There are clear differences across different developing countries and between developing and developed countries (Judge et al., 2010; Singh and Gaur, 2009). For instance, the role of regulations and government intervention, as well as perceptions about what is ethical and unethical, differs significantly between India and China (Gaur et al., 2018; Li and Gaur, 2014; Pattnaik et al., 2018). A person who plagiarises is not just a consumer but also simultaneously a producer. However, our research considers only the consumer's perspective; we have not included the producer's perspective, which is another limitation that future studies may like to address. In Middle Eastern countries, very little significant research has been carried out in the area of education management. Many research papers have been conceptualised to provide solutions to the problems being faced by policymakers and educational institutions in the Middle East region (Singh, 2017), but empirical studies are scarce. Our study can be replicated in the context of Middle Eastern countries to understand the plagiarism phenomenon.
|
A study into the reasons for process improvement project failures: results from a pilot survey
|
[
"Process improvement",
"Survey",
"Project management",
"Continuous improvement projects",
"Project failures",
"Six Sigma project failures"
] |
Summarize the following paper into structured abstract.
1. Introduction: The need for business process improvement (BPI) has become indispensable to overcoming contemporary challenges and achieving and sustaining competitive advantage (Antony and Gupta, 2019). Various structured continuous improvement (CI) approaches, originally based on the philosophies and methods of Total Quality Management (TQM), developed into Lean Manufacturing, Six Sigma and Lean Six Sigma (LSS) and have become a core strategic element in organizations for improving internal and external process performance and succeeding in competitive markets (Adebanjo et al., 2016; McLean et al., 2017; Tewari et al., 2017). Today, BPI initiatives such as Lean and Six Sigma are an integral part of the overall business strategy for many organizations across a range of service and industry sectors (Adebanjo et al., 2016).
2. Literature review: Studies on project failures are imperative, as failures are generally associated with various financial and non-financial losses which can impede the development of other potential projects. Additionally, learning from a project failure can play a key role in the planning and monitoring of future projects to ensure the long-term success of any organization implementing and sustaining CI initiatives such as Kaizen, Lean, Six Sigma or LSS.
3. Research methodology: Data were collected through an online survey which targeted Six Sigma practitioners and project leaders in various manufacturing organizations including Six Sigma BBs, MBBs, GBs and champions. An online survey method was selected, due to its low cost and the ability to deploy the questionnaire in a standardized way, using self-administered methods by the respondents (Couper and Miller, 2008). The questionnaire was piloted with five academics, who are experienced in Lean and Six Sigma topics and who have published extensively in peer-reviewed journals, as well as three Lean and Six Sigma practitioners who have significant experience with projects. The purpose of the piloting exercise was to identify those questions which needed improvement from a practical standpoint as well as ensuring that none of the constructs were omitted by the researchers (Forza, 2002).
4. Pilot survey key results: 4.1 Internal consistency analysis
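The record only names the internal consistency analysis. Assuming it relied on Cronbach's alpha, the standard internal consistency statistic for multi-item survey scales, a minimal sketch with illustrative data:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents x k_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances / total_variance)

# Hypothetical responses: 5 respondents x 4 Likert items of one construct
scores = np.array([[4, 5, 4, 4],
                   [2, 2, 3, 2],
                   [5, 5, 4, 5],
                   [3, 3, 3, 4],
                   [4, 4, 5, 4]])
print(round(cronbach_alpha(scores), 2))  # values above ~0.7 are
                                         # conventionally acceptable
```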
5. Discussion: The reasons for CI project failures are not well explored in the literature, and empirical studies are limited (Arumugam et al., 2016). CI approaches such as Lean, Six Sigma and LSS are directly related to the execution of projects, and success may be defined through successful project management (Laux et al., 2015). Therefore, this study aims to contribute to filling this literature gap, especially since there are direct managerial implications in practitioners' awareness of the reasons for process improvement project failures. We identify and describe ten factors that can influence the failure of process improvement projects based on the literature. These factors were then investigated empirically through an online pilot survey, completed by specialists who had experience in process improvement projects in manufacturing organizations.
6. Managerial/practical implications and limitations: This research has some important implications for managers, process improvement project leaders and process improvement project members in all organizations. The findings suggest that failure in improvement projects cannot be ignored, as the cost to organizations may include the loss of significant resources in terms of time and manpower. This implies that it is vital for senior managers and project leaders such as Six Sigma MBBs and BBs to identify and understand those factors which cause process improvement project failures so that remedial measures can be taken to mitigate the impact of these failures. Moreover, CI or process improvement champions play an immense role in the selection and completion of projects led by Six Sigma MBBs/BBs/GBs. Awareness of the reasons for failures at the project level, team level and organizational level can be very beneficial for such champions in reducing the chances of failures, which result in significant cash, time and energy losses for the several individuals who play a role in project execution. This paper illustrates the main causes of failures which occur when process improvement project leaders execute projects at strategic and operational levels. The roles that senior managers and process improvement project leaders play in reducing the chances of project failure are also highlighted.
7. Conclusions and agenda for further research: Improvement initiatives such as Six Sigma and LSS are increasingly widespread in manufacturing companies and other types of organizations, but there is still a high drop-out rate for companies, mainly due to the high costs and absence of results. Project failure occurs when there is no return on the investment or time invested, which can undermine the interest and efforts required for any process improvement initiative. This paper reports on the results of a Six Sigma pilot survey carried out with Brazilian CI specialists from manufacturing companies. The results of the study showed that there is a moderate rate of project failures and that one of the most worrisome stages is control, the stage where projects are discontinued and results are not maintained. This could cost organizations several thousand dollars or, in some larger multi-national corporations, several million dollars. The paper also presents the factors and variables which identify the main reasons for failure. This study was carried out with some limitations, such as the sample size (pilot study) and a focus on Brazilian companies and specialists with manufacturing experience. Due to the limited sample size of the study, the authors could not perform any advanced statistical analyses (e.g. factor analysis). Moreover, the authors did not carry out a separate analysis across the different sizes of firms that participated in the pilot survey to determine if firm size has any impact on the failure rate of process improvement projects. Nor has any analysis been performed to understand whether companies with a high degree of maturity in Six Sigma implementation experience lower project failure rates.
|
Dynamic benchmarking methodology for quality function deployment
|
[
"Benchmarking",
"Quality function deployment",
"Analytical hierarchy process",
"Competitors",
"SWOT analysis"
] |
Summarize the following paper into structured abstract.
1 Introduction: A competitive advantage, generally, can be gained if a company produces a product that not only addresses what the customer values most, but also performs better than its competitors in terms of quality, cost, and timeliness. However, these two factors, namely, the customer needs and competitors' performance, change over time, and yet most product-design processes seem to have oversimplified this fact. In the quality function deployment (QFD) literature, which is known as one of the most important tools in the customer-driven product and service-development process (Bergman and Klefsjo, 2003; Xie et al., 2003; Raharjo et al., 2008; Miguel and Carnevalli, 2008), the former factor has been quite well addressed (Shen et al., 2001; Xie et al., 2003; Wu et al., 2005; Wu and Shieh, 2006; Raharjo et al., 2006). Unfortunately, too little attention has been paid to the latter, which is equally critical. To design or upgrade a product successfully using the QFD, it is not sufficient to only observe the change of customer requirements or attribute importance over time, because during the product-creation process the competitive condition, especially competitors' performance, changes as well. Therefore, to improve the likelihood of success of a product design or upgrade process, both of these factors and their dynamics should be taken into account. This paper aims to address this issue, that is, how the dynamics of these two factors along with their interaction can be integrated into a QFD analysis. For simplicity, it is referred to as the dynamic benchmarking methodology. This proposed methodology essentially comprises two novel technical approaches. First, it is the use of an exponential smoothing-based forecasting technique, as described in Raharjo et al. (2009), to model the trend of the importance rating values and the competitive benchmarking information. Note that, in contrast to the traditional practice, which mostly uses a direct rating scale of, for example, 1-5 or 1-9 (Hauser and Clausing, 1988; Cohen, 1995), the importance rating values and the competitors' benchmarking information are obtained using the analytic hierarchy process's (AHP's) relative measurement. Second, it is the approach, called the strength-weakness-opportunity-threat (SWOT)-based competitive weighting scheme, to derive weights by analyzing the interaction between the two factors. In addition, this proposed weighting scheme also serves as a more systematic way to substitute the traditional QFD customer competitive target setting and sales point value determination. The following sections are organized as follows. Section 2 will describe the need for dynamic benchmarking in the QFD based on what has been done in the literature. Then, the proposed benchmarking methodology is elaborated in Section 3. The AHP pairwise comparison is used as the main tool to elicit the knowledge, opinions, and judgments of the customer on the two factors mentioned above. To illustrate how the proposed methodology works practically, an illustrative example is provided (Section 4). Section 5 will elaborate how the competitive weighting scheme is used to derive the final customer requirements' weight, namely, the strategic importance rating (SIR) (Chan and Wu, 2002), by considering the two factors' interaction. Finally, the novel contribution and possible extensions are discussed (Section 6).
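The exact forecasting procedure follows Raharjo et al. (2009) and is not reproduced in this record. As a hedged sketch of the general idea, the code below applies simple exponential smoothing to a series of AHP importance priorities; a trend-aware variant such as Holt's method may be closer to the original, and the series and smoothing constant are illustrative.

```python
import numpy as np

def simple_exponential_smoothing(series, alpha=0.3):
    """Return the smoothed level after each observation; the final
    level serves as the one-step-ahead forecast. alpha is an assumed
    smoothing constant, not a value from the paper."""
    level = series[0]
    levels = [level]
    for y in series[1:]:
        level = alpha * y + (1 - alpha) * level
        levels.append(level)
    return np.array(levels)

# Hypothetical nine months of AHP importance priorities for one DQ
priorities = np.array([0.32, 0.30, 0.33, 0.35, 0.34, 0.37, 0.38, 0.40, 0.41])
print(simple_exponential_smoothing(priorities)[-1])  # forecast for month 10
```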
2 The need for dynamic benchmarking: literature review and research gap: A benchmarking process can be regarded as a continuous and proactive search for the best practices leading to a superior performance of a company (Camp, 1995). Successful benchmarking may lead to an improved return on investment ratio, increased market competitiveness, cost reductions, a higher chance of identifying new business opportunities, and enhanced transparency and performance (Ramabadran et al., 2004; Braadbaart, 2007). It provides the insights necessary to effectively pinpoint the critical success factors that set the most successful firms apart from their competitors, or to a greater extent, that separate the winners from the losers (Cooper and Kleinschmidt, 1987, 1995). Specifically, the benchmarking information can serve as a foundation for a company to formulate strategic decisions effectively (Spendolini, 1992). An important fact worth highlighting is that, with the passage of time, the company's as well as the competitors' conditions will certainly change. Therefore, the benchmarking process should not remain static. The importance of dynamic benchmarking has been realized by several researchers. Min et al. (1997) used the AHP for competitive benchmarking and substantiated the need for dynamic benchmarking that is capable of evaluating the changing degree of a clinic's patient satisfaction over time. In an attempt to identify tools, methodologies, and metrics that can serve as enablers for making benchmarking in agile environments effective and efficient, Sarkis (2001) highlighted the importance of a forward thinking (proactive) approach in benchmarking, such as by using forecasting techniques, on the basis of historical data, to obtain future benchmarks. Min et al. (2002) analyzed the changing hotel customer needs over time and demonstrated the importance of dynamic benchmarking in striving for continuous service quality improvement. Unfortunately, they only focused on two data points in time, namely years 1995 and 2000, which is very likely inadequate for observing the change over time. Salhieh and Singh (2003) proposed a dynamic framework using principles of system dynamics to incorporate benchmarking for effective university policy design. However, their approach can be considered "reactive" since they relied on a feedback mechanism. Tavana (2004) proposed a dynamic benchmarking framework, which uses the AHP and an additive multiple criteria decision-making model, for technology assessment at NASA. In the existing QFD literature, the issue of benchmarking has been, to some extent, oversimplified. Some previous attempts can be found in Lu et al. (1994), Ghahramani and Houshyar (1996), Gonzales et al. (2005), Iranmanesh et al. (2005) or Ginn and Zairi (2005). Using a real world case study, Kumar et al. (2006) demonstrated that there is a synergistic effect in integrating benchmarking with QFD methodologies for companies that seek higher levels of financial and strategic performance through product improvement. Gonzales et al. (2008) demonstrated the effective application of QFD and benchmarking to enhance academic programmes. More recently, Lai et al. (2008) showed the importance of competitor information for deriving QFD's customer requirements ranking. With respect to this, they developed a new ranking method that is based on fuzzy mathematics. Nevertheless, almost none of the existing studies have adequately addressed the need to provide a more formal and systematic approach to use dynamic benchmarking in the QFD.
As mentioned previously, the competitive condition may change during the product-creation process; therefore, how to appropriately deal with such change is of great necessity. Pursuing the "proactive" stream of research in dealing with market dynamics, first initiated by Shen et al. (2001) and Xie et al. (2003), this paper attempts to fill this gap by proposing the use of a forecasting technique for monitoring, apart from the change of customer preferences, the change of the benchmarking information in the QFD. It is worth noting that the AHP relative measurement is suggested for deriving the competitive benchmarking information and the customer preferences. Examples of the use of the AHP-based approach for benchmarking can be found in Korpela and Tuominen (1996), Min et al. (1997), Chan et al. (2006), Chen and Huang (2007), Dey et al. (2008), Tavana (2008) or Raharjo et al. (2008). In the end, it is expected that, with a timely update of information on the change of competitors' performance and the change of customer preferences over time, along with their interaction, the QFD decision-making process can be improved.
3 The proposed dynamic benchmarking methodology: This section describes the proposed dynamic benchmarking methodology for the QFD. Section 3.1 provides necessary information on how one may obtain the input data from the customer through the use of the AHP. Then, the step-by-step procedure to use the proposed methodology is elaborated in Section 3.2. 3.1 The input
4 An illustrative example: To illustrate the proposed approach, consider the following example. Suppose that there are three DQs being monitored. It is assumed that the historical data for a period of nine months for the importance rating priorities and the competitive assessment priorities are already available. These data are generated from a simulation of AHP reciprocal matrices using the fundamental scale of 1-9. All the generated matrices have a consistency ratio value of less than 0.1. For the sake of simplicity, it is assumed that there is no problem in Steps 1 and 2. 4.1 The input
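The example rests on AHP reciprocal matrices whose priorities and consistency ratios are computed by standard means. A minimal sketch of the principal-eigenvector priority derivation and Saaty's consistency ratio; the matrix shown is illustrative, not the paper's simulated data.

```python
import numpy as np

# Saaty's random consistency indices by matrix order
RI = {3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24, 7: 1.32, 8: 1.41, 9: 1.45}

def ahp_priorities(A):
    """Priority vector and consistency ratio of a reciprocal
    pairwise-comparison matrix on Saaty's 1-9 fundamental scale."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    eigvals, eigvecs = np.linalg.eig(A)
    k = np.argmax(eigvals.real)            # principal eigenvalue index
    w = np.abs(eigvecs[:, k].real)
    w /= w.sum()                           # normalized priorities
    ci = (eigvals[k].real - n) / (n - 1)   # consistency index
    return w, ci / RI[n]                   # (priorities, consistency ratio)

A = [[1, 3, 5], [1/3, 1, 2], [1/5, 1/2, 1]]  # illustrative matrix
w, cr = ahp_priorities(A)
print(w, cr)  # keep only matrices with cr < 0.1, as in the example
```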
5 The competitive weighting scheme: a SWOT-based approach: Taking into account the interaction between the forecasted importance rating and the forecasted performance of the competitors, a competitive weighting scheme is proposed. The basic idea is to assign a multiplier to each DQ based on the forecasting results obtained in the previous section. Compared to the traditional QFD, this approach may provide a more formal and systematic way for QFD practitioners to carry out the customer target setting and sales point determination in the house of quality. The proposed competitive weighting scheme is based on the idea of SWOT analysis. The framework is shown graphically in Figure 6. The x-axis denotes the forecasted competitors' relative performance (CRP), while the y-axis denotes the forecasted importance of a customer need (DQ). Note that both axes use relative values as a result of the AHP procedure. The weighting scheme can basically be divided into four groups as follows: 1. Strength. This case is for the situation when both the future CRP and the future importance are rather low. In other words, the competitors are doing relatively worse than the company on a relatively unimportant attribute. Thus, a multiplier value of 1 is assigned to this type of attribute. 2. Weakness. This is for the situation when the competitors are doing much better than the company on a relatively unimportant attribute. Thus, a multiplier value of 3 is assigned. In the case when the CRP is equally good compared to the company's, a multiplier value of 2 is assigned. 3. Opportunity. This is for the situation when the future CRP will be relatively worse than ours on a very important attribute. Thus, this case is regarded as an opportunity for the company to differentiate itself from the competitors. A multiplier value of 7 is assigned. 4. Threat. When the future CRP is relatively much better than the company's on a very important attribute, this signals a threat. The QFD practitioner should pay special attention to this case. Thus, a multiplier value of 9 is assigned. A multiplier value of 8 can be assigned for the case when the future CRP is equally good compared to the company's. For the in-between multiplier values other than those described above, such as 4, 5 and 6, a similar interpretation can be used accordingly. Take, for example, a multiplier of 6, which is assigned when the future CRP is relatively much better than the company's on a moderately important attribute. Table V shows the complete information on the proposed weighting scheme based on the x- and the y-axis. The "IR" here refers to the forecasted importance rating of the customer attributes, while the "CRP" stands for competitors' relative performance. A multiplier value (weight) is assigned according to the IR and CRP level; for example, a weight of 2 is assigned when the IR level is low and the CRP level is medium. The "Note" column indicates the position of the weight in Figure 6; for example, "S-W" indicates that its position lies between strength (S) and weakness (W). Step 6
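The multiplier grid of Table V is fully determined by the values stated in the text (1, 2 and 3 on the low-importance row through 7, 8 and 9 on the high-importance row), so it can be reconstructed as a simple lookup; the function name below is our own, not the paper's.

```python
# Reconstruction of Table V: rows are the forecasted importance rating (IR)
# level, columns the forecasted competitors' relative performance (CRP) level.
# Weights 4 and 5 are the "in-between" values the text says follow the same
# interpretation as those stated explicitly.
WEIGHTS = {
    ("low", "low"): 1,    ("low", "medium"): 2,    ("low", "high"): 3,
    ("medium", "low"): 4, ("medium", "medium"): 5, ("medium", "high"): 6,
    ("high", "low"): 7,   ("high", "medium"): 8,   ("high", "high"): 9,
}

def competitive_weight(ir_level: str, crp_level: str) -> int:
    """Multiplier for a DQ given its forecasted IR and CRP levels."""
    return WEIGHTS[(ir_level, crp_level)]

print(competitive_weight("high", "low"))   # 7: an "opportunity" attribute
print(competitive_weight("high", "high"))  # 9: a "threat" attribute
```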
6 Conclusions: The aim of this paper was to demonstrate how one may incorporate both the dynamics of competitors and the dynamics of customer preference into a QFD analysis. The ultimate goal of analyzing the dynamics of these two factors as well as their interaction is to come up with better strategies when using the QFD for dealing with a dynamic market, such as the consumer electronics market. Stalk and Webber (1993) wrote that: Managers, to be both effective in their work and, ultimately, successful in sustained competition, must continue to push their strategic thinking to keep pace [...] Strategy is and always has been a moving target. The importance of keeping pace with change when formulating competitive strategies is precisely the main message of this paper. Compared to previous research, this paper has extended the traditional QFD in three ways. First, it is the use of the AHP relative measurement in eliciting the judgments of the customer, not only for the importance rating but also for the customer competitive assessment. As explained in Section 3, a relative measurement may be regarded as a better approach to assessing the competitive condition compared to an absolute measurement. This is because a "good" performance is, to some extent, determined relatively by the best-in-class competitors. Second, it is the incorporation of the competitors' dynamics in terms of the change of customer competitive assessment over time. A timely update of customer competitive assessment information can be very useful to continually evaluate the current performance, identify areas for improvement, and eventually set goals for the future. Another advantage of considering the dynamics of competitors is to tackle the change of competitors' performance during the product-creation process so as to avoid producing unwanted products or products that are inferior to the competitors'. Third, it is the use of the SWOT-based competitive weighting scheme to analyze the interaction of both factors taking into account their dynamics. It is expected that using the weighting scheme may improve the accuracy of the DQs' priorities, which in the end may increase the likelihood of success of a product design or upgrade process. The limitation of the proposed methodology is that it might take a certain amount of time and effort to collect the necessary data over time. However, this might be justified considering the improved accuracy of the QFD results. It is worth noting that the data collection should be carried out in a specific customer segment. There are several areas worth pursuing in the next steps. From a practical standpoint, a case study to showcase the effectiveness of the proposed methodology is certainly of great value. One potential extension is to apply the approach in developing innovative products using the QFD (Miguel, 2007). From a methodological standpoint, some possible extensions to the current work may include an analysis of the subsequent steps in the QFD, that is, the continuation of the first house of quality to the next matrices or houses; a comparison study between the AHP-based approach and other methods for eliciting customers' judgments; and an application of the proposed methodology in a service design context.
|
Ensuring environmental performance in green leases: the role of facilities managers
|
[
"Landlord",
"Tenant",
"Environmental performance",
"Green lease",
"Split incentive",
"Green lease schedule",
"Facilities manager"
] |
Summarize the following paper into structured abstract.
Introduction: Improving the environmental performance of commercial properties is very challenging due to the range of stakeholders involved in their operation; for example, tenants may not be interested in maintaining the original sustainability features of a building if there are no direct benefits for them. Further, traditional lease agreements would not recognise the importance of sustainability unless it is specifically included in the terms of the agreement. A green lease is an addition to the standard contract between the landlord and tenant in which environmental sustainability is formally recognised (Janda et al., 2017). In previous studies, key drivers for the uptake of a green lease were identified; these drivers included an increased rental return from the building; stakeholder demand for environmental accountability; energy savings; environmental responsibility to reduce carbon emissions; and water conservation (Rameezdeen et al., 2017; Burroughs, 2011). Collins and Junghans (2015) divided these drivers into voluntary motivators and obligatory motivators, with corporate social responsibility (CSR) identified as a powerful voluntary motivator for green leasing. The key barriers to the implementation of a green lease were also identified in these early studies; some of the barriers include uncertainty regarding new laws untested in the courts, the need for a green lease schedule (GLS) to be tied to an environmental rating tool, uncertainty surrounding the consistency of these rating tools, and the need for large investments for the retrofitting of low-rated buildings (Rameezdeen et al., 2017; Burroughs, 2011).
Literature review: Green lease
Research methods: This research is exploratory in nature. Creswell (2009) observed that exploratory research is more suitable for situations with limited previous work on the subject area. This study approached the research topic by applying a qualitative method of inquiry; the aim was to identify incentive gaps and information asymmetries between the landlord and facilities manager in a green lease, targeting the GLS-based green leases that are popular in South Australia. Due to the relatively limited adoption of sustainable buildings in South Australia, a qualitative method was more appropriate, and the exploratory approach enabled the collection of perceptions and experiences of those involved in green leases (Du Toit and Mouton, 2013). A sequential research design involving three stages as described below was utilised for data collection.
Findings: Effectiveness of a green lease
Discussion: In addition to the split incentives between landlord and tenant, this research confirmed the existence of an incentive gap between the landlord and facilities manager in a green lease, thus giving rise to a principal-agent problem aggravated by the asymmetry of information. These two incentive gaps form a typical 'double principal-agent problem' as illustrated in Figure 6. While past research has dealt extensively with the first incentive gap, very little is done on the second. Although the focus of this research is on the second incentive gap, the first gap cannot be ignored due to the expectations of a facilities manager to serve the tenant.
Research limitations: While the study makes several contributions to the field of green leasing, in particular to understanding the operational performance of green leases, some limitations should be noted. The major limitation is the small sample size of the interviewees participating in the study; therefore, this qualitative study may not be fully representative of green leases implemented in Australia. The interviewees consisted of both landlord representatives and facilities managers based in Adelaide, South Australia; therefore, the views expressed by these professionals would be biased toward South Australian real estate and facilities management practices. In addition, Adelaide being a smaller market, the issues faced in larger property markets such as Sydney and Melbourne would not have surfaced in the discussion. However, the team took every effort to ensure objectivity in the interview design by conducting six telephone interviews with property professionals from other states of Australia to test the conceptual framework and research tool prior to the main data collection stage. In addition, the interviewees were senior personnel in their respective organisations with extensive experience in the field, and thus were well qualified to speak about green leases and sustainable building practices. Efforts were also made to explore additional details during the interviews by asking probing questions to eliminate the potential biases discussed above.
Conclusions: While facilities managers are striving to achieve sustainability in their operations, the topic of the green lease is largely neglected in the extant literature. Ensuring operational performance, particularly regarding environmental efficiency, is the cornerstone of a green lease agreement between a landlord and tenant. The job of ensuring this performance rests mainly with the facilities manager, who looks after the operation and maintenance functions of a building. As green leasing becomes more widespread, future facilities managers will increasingly be exposed to this concept in their day-to-day operations. The literature has highlighted the operational performance of a building as a major barrier to implementing a green lease. As facilities managers are part and parcel of the team responsible for operational performance, research into the reasons for such poor performance is highly warranted.
|
Healthcare managers' perception of economies of scale
|
[
"Economies of scale",
"Hospital size",
"Qualitative content analysis"
] |
Summarize the following paper into structured abstract.
Introduction: This paper presents findings from a qualitative study of how healthcare managers perceive economies of scale and the underlying mechanisms. Previous research on economies of scale in healthcare is dominated by quantitative approaches and does not draw on experiences and insights of professionals familiar with the phenomenon.
Methods: The exploratory nature of the research question lends itself to a qualitative research design. In this study, a number of decision makers in the healthcare sector have been interviewed, and the interview data has been coded and analysed.
Results: Scale effects are perceived at both department level and overall hospital or clinic level. The impact of scale and the related mechanisms described by the participants are summarized in the following section. When the participants described how scale affected the performance of healthcare organizations (in particular their own workplace and previous workplaces), it was evident that the description was very much dependent on the operating mode (Lillrank et al., 2010) of their workplace. It was possible to identify three types of service contexts with similar operating modes where the descriptions of scale dynamics concurred. The first type of service context was departments where patients arrive, meet with a doctor or other healthcare professional for a session of diagnosis, treatment or follow-up, and then leave the healthcare facility (i.e. outpatient care). The second type of service context was surgical wards where patients are treated by a team of healthcare professionals. (The participants did not distinguish between outpatient day surgery and inpatient surgery.) The third type of service context was inpatient wards where patients are admitted for overnight care.
Discussion: Most mechanisms for economies of scale that are found in this study are in line with previous literature (e.g. Dranove, 1998; Gandjour and Lauterbach, 2003; Hayes et al., 2005; Lillrank et al., 2015). The geometric properties of processing capacity (proportional to volume) and cost (proportional to surface) were not reported in this study, but it seems natural that this is not as relevant for services as for process industries. Additional mechanisms found in this study include reduced vulnerability (from a staffing perspective) and improved recruitment and retention for economies of scale, and lower staff involvement for diseconomies of scale. These additional mechanisms are related to personnel, reflecting both that personnel cost represents a large share of total cost in healthcare operations, and that healthcare services are highly dependent on skilled professionals, which increases complexity. Spreading of fixed costs is the most commonly reported mechanism in the study, and as important as the fixed cost of facilities and equipment is the fixed cost of having doctors from all medical specialties available on-call at all hours.
Conclusion: Healthcare professionals perceive the effect of scale on performance to be very different in different types of healthcare services. They report many of the scale mechanisms that are recognized in previous literature and some mechanisms that are specific to healthcare delivery. While some mechanisms are reported for all types of services, others are exclusive to, or more significant in, one type of service.
|
Robust-optimum multi-attribute age-based replacement policy
|
[
"Replacement control",
"Maintenance",
"Costs",
"Age‐based replacement policy",
"Multi‐attributes",
"Robust design",
"Optimization"
] |
Summarize the following paper into structured abstract.
Introduction: The age-based replacement policy is known as the most common maintenance policy (Wang, 2002). This policy consists of replacing an item when it reaches a certain time of life, topt (the optimum replacement time), or when it fails, whichever occurs first. A common approach in determining topt is to consider the minimum cost. However, minimizing the cost is not the only goal of the maintenance activity. The goal of the maintenance activity is to support the production process with adequate levels of availability, reliability and operability at an acceptable level (Coetzee, 1997). This emphasizes that all the attributes are significant and that, in determining topt, the decision should be based on a multi-attribute approach. Gopalaswamy et al. (1993) argued that a preventive maintenance policy based on a single criterion (the cost rate) is rather unreliable and forces the decision maker (DM) into a "corner" by making him/her take decisions based on the cost rate alone. Similar results were obtained by Azaiez (2002), who identified that, in the eyes of the DM, cost is not the only parameter in selecting the appropriate replacement policy; other parameters such as the quality of the output, labor productivity and cash flow availability should also be considered. Furthermore, the problem becomes more complex when considering the fact that not all the aspects can be translated into monetary value. Failure will result not only in a tangible impact, which can be directly captured as financial loss, but also in an intangible impact, which cannot (Madu, 2005). Companies are realizing the importance of the intangible impacts (especially damage to the company's reputation) and, in spite of the difficulties, they are attempting to include them in the costing scheme (Schiffauerova and Thomson, 2006). It can be concluded that DMs prefer to take a broader view of the problem and to visualize all the important aspects simultaneously, rather than treat one aspect in isolation (Cavalcante and de Almeida, 2007). In the age-based replacement model, a common assumption is that the parameters of the model are known and deterministic. However, in practice, they must be estimated from limited historical data (Kobbacy et al., 1997; Van Noortwijk, 2000). This estimation introduces uncertainty into the model. Mauer and Ott (1995) demonstrate that uncertainty in the cost attribute can significantly change topt. Halim and Tang (2009) constructed a 90 percent confidence interval (CI) for the topt of relays and found that the spread of the confidence bound for topt is large; the upper limit of topt is approximately four times the lower limit. In the field of engineering design, one approach to dealing with uncertainty in models is to use the robust design technique, a philosophy popularized by Taguchi in the 1950s. A model solution is considered robust if the output is less sensitive to the perturbation of variation in the model parameters. Thus the solution may not be the one that has the optimum output, but it is the one that has the lowest variance of the output. The objective of this paper is to develop a method for determining a robust-optimum interval of preventive replacement, a solution which is not only optimum, but also robust.
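For reference, the single-criterion (minimum-cost) formulation alluded to above is, in the standard age-replacement literature, the minimization of the long-run expected cost per unit time; this is the textbook form, not an equation quoted from the paper:

$$C(t) = \frac{c_p\, r(t) + c_f\, [1 - r(t)]}{\int_0^t r(u)\, du},$$

where $c_p$ is the cost of a preventive replacement, $c_f > c_p$ the cost of a replacement at failure and $r(t)$ the item's reliability function; topt is then the t that minimizes C(t).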
Overview of the proposed method: In this study, a method for determining an interval of preventive replacement is proposed based on a combination of the multi-attribute age-based replacement policy and the robust design technique. A brief description of this method is presented in Figure 1. The proposed method begins with the formulation of the optimization problem for the cost attribute, kh(t), and the reliability attribute, r(t). The next step is the normalization of the objective functions by using the upper-lower-bound method. Normalization is performed to avoid an incorrect solution point due to different orders of magnitude. The objective functions are combined into a single aggregate objective function (AOF) of the optimum multi-attribute age-based replacement policy, fopt(t), and an appropriate weight is assigned. The next step is to formulate an objective function for the robust multi-attribute age-based replacement policy, frob(t), by using the robust design approach. The problem is then formulated as a bi-objective optimization problem, with the two objectives being the optimum multi-attribute age-based replacement, fopt(t), and the robust multi-attribute age-based replacement, frob(t). The next step is to assign a priority to each objective so that a bi-objective optimization problem with priority can be formulated. The interval for the replacement time (trob-opt) of the robust-optimum multi-attribute replacement policy is determined by using the Waltz lexicographic approach. First, the optimization problem for the objective function with the higher priority is solved and an acceptable tolerance (d) is assigned for this first objective function. Second, the optimization problem for the objective function with the lower priority is solved with respect to the assigned tolerance. After d is assigned for the second objective function, the interval for the replacement time (trob-opt) can be determined.
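To make the sequence of steps concrete, here is a minimal numerical sketch of a procedure of this kind in Python. It assumes a Weibull failure model and the textbook cost-rate attribute above; the parameter values, the 0.7/0.3 weighting, the variance-based robust objective, the +/-10% perturbation scheme and the tolerances are all illustrative assumptions, not the authors' formulation or case-study data.

```python
import numpy as np

rng = np.random.default_rng(0)

def attributes(t, beta, eta, cp, cf):
    """Cost-rate attribute kh(t) and unreliability attribute 1 - r(t)
    of an age-based replacement policy under a Weibull failure model."""
    R = np.exp(-(t / eta) ** beta)                       # reliability r(t)
    # Expected cycle length: integral of r(u) du over [0, t] (trapezoid),
    # using r(0) = 1 for the initial segment so cycle[0] > 0.
    seg = np.concatenate([[(1 + R[0]) / 2 * t[0]],
                          (R[1:] + R[:-1]) / 2 * np.diff(t)])
    kh = (cp * R + cf * (1 - R)) / np.cumsum(seg)        # cost per unit time
    return kh, 1 - R

def normalize(f):
    """Upper-lower-bound normalization of an objective onto [0, 1]."""
    return (f - f.min()) / (f.max() - f.min())

def aof(t, beta, eta, cp, cf, w):
    """Weighted aggregate objective function fopt(t) of the two attributes."""
    kh, unrel = attributes(t, beta, eta, cp, cf)
    return w * normalize(kh) + (1 - w) * normalize(unrel)

t = np.linspace(0.05, 3.0, 400)
beta, eta, cp, cf, w = 2.5, 1.0, 1.0, 10.0, 0.7

f_opt = aof(t, beta, eta, cp, cf, w)

# Robust objective frob(t): variability of the AOF when the estimated
# Weibull parameters are perturbed (here, +/-10% uniform noise).
samples = [aof(t, beta * rng.uniform(0.9, 1.1), eta * rng.uniform(0.9, 1.1),
               cp, cf, w) for _ in range(200)]
f_rob = np.var(np.vstack(samples), axis=0)

# Lexicographic step: solve the higher-priority objective first, keep every
# t whose fopt lies within tolerance d1 of the optimum, then minimise frob
# over that set; a tolerance d2 on frob yields the replacement interval.
d1, d2 = 0.05, 0.10
ok1 = f_opt <= f_opt.min() + d1
rob_min = f_rob[ok1].min()
ok2 = ok1 & (f_rob <= rob_min + d2 * max(rob_min, 1e-12))
print(f"robust-optimum replacement interval: "
      f"[{t[ok2].min():.3f}, {t[ok2].max():.3f}]")
```

The variance-under-perturbation choice for frob(t) simply echoes the Taguchi-style robustness criterion described in the introduction; any other dispersion measure of the AOF would slot into the same lexicographic step.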
Description of the proposed method: Attributes of age-based replacement policy
Case studies: To illustrate the application of the proposed method, two case studies are provided. In the first case study, the historical data are limited and the cost ratio is not high. In the second case study, we have a sufficient amount of historical data and the cost ratio is high.
Case study 1: the hydraulic hammer of a scaling machine
Conclusions: In the present study, a method for a robust-optimum multi-attribute age-based replacement policy is proposed. The proposed method can be used by DMs to determine the interval time for preventive replacement that provides a robust and optimum solution.
|
All the world wide web's a stage: Improving students' information skills with dramatic video tutorials
|
[
"Information literacy",
"Library instruction",
"Videos",
"Academic libraries",
"Marketing",
"Turkey"
] |
Summarize the following paper into structured abstract.
Introduction: Academic libraries have been using video, in various changing formats, for more than three decades as a means of library instruction and orientation. However, the recent expansion of the internet, and particularly of video-sharing websites such as YouTube, has resulted in a veritable explosion of online video as a method of communication and conveying information, and libraries have sought to keep pace with these changes. The project outlined in this paper was an attempt by Bilkent University Library, Turkey, to produce a series of videos that are at once both instructional and informative, but also serve to market the library to the wider university community. This paper will firstly explain the theoretical underpinnings of our project: why dramatic and online videos are an effective way of stimulating students' memory and learning. Secondly, we will consider the problems of promoting the videos and then of evaluating their effectiveness as pedagogical and marketing tools.
Bilkent University was established in 1984-1986 by the late Professor Ihsan Dogramaci as Turkey's first private university. With the exception of a few departments and programs, formal instruction at Bilkent is predominantly through the medium of English, so most undergraduates are ESL students. Furthermore, admission to universities in Turkey overall is regulated by a national entrance examination, based on multiple choice questions, and as a consequence of preparing for this test, many new undergraduates have little experience of independent and analytical study and research. There is clearly scope for the promotion of information literacy among such students, and there is a definite role for academic libraries (Thornton and Kaya, 2010). In the light of this, Bilkent University Library is at present actively seeking to promote information literacy and library skills throughout the student body, through face-to-face instruction and by developing its use of Web 2.0 tools. The video project described here is a direct product of this work.
Literature review: As stated above, the main aim in producing the videos described in this paper was to facilitate students' information and library skills. The American Library Association, in a very well-known quotation, has defined information literacy thus: "To be information literate, a person must be able to recognise when information is needed and have the ability to locate, evaluate, and use effectively the needed information" (American Library Association, 1989). As will be seen below, most video projects that relate in some way to information needs, rather than a virtual library tour, have sought to facilitate locating information, usually by means of specific screen-casts that illustrate in some detail how to use a particular resource (e.g. Oehrli et al., 2011; Birch et al., 2010; Small, 2010). On the other hand, some videos, especially those with a dramatic element ("movies"), are normally more general in their purpose, and seek to illustrate the basic principles of using the library or locating information (Islam and Porter, 2008; Mizrachi and Bedoya, 2007). In addition to locating information, video can to some extent be employed to demonstrate ways of using that information effectively. For example, there have been a number of videos addressing the problem of plagiarism and how to avoid it, perhaps most notably the spoof "A Plagiarism Carol" produced by the University Library and the Department of Information Science and Media Studies at the University of Bergen (2010) (see also Kellum et al., 2011; Stanton and Neal, 2011). Most of the following discussion will, however, focus on videos that facilitate locating information. It is not the intention here to offer a detailed review of existing papers about academic library videos, as a useful annotated survey of library videos was published relatively recently by Islam and Porter (2008). Here we will review existing online library videos to determine what form they have taken, and secondly we will examine the underlying reasons why videos in general are potentially an effective medium for online instruction and developing information literacy, with special reference to memory and learning.
What videos? A survey of existing library videos
Production process: The video project outlined here grew out of a series of short sketches acted out by reference librarians during information literacy workshops for first-year undergraduate students (ENG 102) held at the Library during the Fall Semester 2010-2011. The purpose of the sketches was to present in dramatic form a common reference "problem" faced by students relevant to the theme of the particular workshop. Each sketch involved one reference librarian playing "herself" and another playing the role of the student with the query or problem. These five-minute scenarios proved to be relatively popular with the students, both as a means of conveying the solution to the particular problem and as a way of starting the sessions on an informal and friendly tone. It was subsequently proposed that these sketches could be recorded and put online.
This proposal was then presented to the Department of Communication and Design (COMD) in December 2010 as a possible student project and, after some discussion, it was assigned as a formal course project for a group of students in the Master in Fine Arts (MFA) program for the coming Spring semester (February-May 2011). During January 2011 reference librarians prepared a list of possible basic topics, which was soon reduced from 13 to seven themes, and draft scripts were then prepared by the librarians and sent to the MFA students. In early February 2011, we met the students, who recommended that the scripts needed to be more humorous and visual, and they argued strongly that the final videos should be watchable not only for library instruction but also for their own sake as pieces of entertainment. After an attempt by the Library to meet these recommendations, it was then suggested that the students themselves should prepare scripts that the librarians could edit according to their needs and priorities. The students prepared new scripts, now five in number, which involved a series of "interesting" characters coming to the Library with a particular problem which the librarian or library staff character would then help to solve. The students envisaged the videos as a coherent "series" - rather than as five separate and independent films - with certain characters appearing in more than one video. After some discussion, it was also agreed that the role of the librarian would be played by real librarians and it was therefore proposed that they attend a few "workshops" in March to develop their acting skills. Five librarians agreed to act, along with one security guard. The decision to use "real" members of the Library, and to film the videos in the Library itself, was partly to give future viewers a point of connection or familiarity, and also as a way of marketing the librarians. It was also decided to record the videos in Turkish (with English subtitles), in order to make them more attractive to our undergraduate and outside (walk-in) users. Costumes and other extras were made or acquired from various local sources, except for a special polar bear costume, which was purchased from the USA and was briefly delayed in Turkish customs! The students' scripts were edited by librarians: some elements, humorous in themselves, were changed as they involved, for instance, behaviour which we considered inappropriate for librarians.
Filming was due to start in early April 2011, once the bear had made it through customs. The final list of videos was as shown in Table I. The videos were shot in the Library during April and May, with some inevitable disruption to the normal quiet of the reading rooms. "Fine cut" versions were ready for viewing by the end of May, and a few changes were suggested by librarians and members of COMD. The videos were shown to a group of students, librarians and academics on 1 June, when feedback was formally collected (see below). One video - General Rules - was subsequently shown during the annual LIBER Conference at Barcelona on 30 June 2011. The same video was also used by librarians during a series of library orientations for new undergraduate students between 16 and 21 September. The remaining videos were finalised during the summer, and uploaded to YouTube on 28 September for use at the start of the new academic year. The relevant URL is: www.youtube.com/playlist?list=PL821974BA0B9ABEF9
Results: Technically, the "results" of the process described above are of course the five online videos, and we will accordingly briefly discuss them here and consider the importance of collaboration in the development of the project. However, it should be stressed that the actual production of the videos is only the first stage of the project: making sure that our users watch the videos and, hopefully, learn from them is the ultimate goal. Therefore, we will also discuss the problem of promoting and evaluating the videos below.
Broadly speaking, the five videos fall into two categories in terms of their main purpose. Firstly, there are three which seek to show how to locate information by searching relevant library resources: these videos focus on the basic principles of locating information rather than presenting in detail how to use a specific resource or database. Thus, the video entitled Starting Your Research is intended to stress the need for varied searching strategies when seeking information. In this video, a polar bear is seen fruitlessly typing at a computer terminal in the library. He is approached by a librarian and explains to her that his home has melted away and that he wishes to discover why. However, he cannot find anything relevant on the computer when he types "melting icebergs". The librarian verifies this and then proposes using alternative keywords/phrases, such as "global warming" or "greenhouse effect" instead. They then search for "global warming" in Turkish and are rewarded with results. The video Catalogue Search focuses on a supposed "treasure" belonging to the seventeenth-century sailor Christopher Newport - presented here as a pirate - and the attempts of a student to locate it, with the aid of a disembodied voice who gives him advice. Searching the library catalogue for Stevenson's Treasure Island and, with help, finding the book on the shelf, the student is finally informed that the real treasure is, of course, knowledge, and we discover that the disembodied voice is that of a librarian! The relatively short Finding a Journal seeks to outline the basics of the periodical collection, which is less frequently used by younger students. The video begins with Sir Isaac Newton sitting under a tree when an apple suddenly lands on his head: experiencing a "eureka" moment, Newton sets off for the library. He informs a librarian that he wishes to see the London Royal Society Proceedings, adding that he is himself a member of the Society. The librarian checks the system and explains the print and electronic holdings for this journal, and they then leave to find the print volumes on the shelves. On the other hand, the two remaining videos are less concerned with locating information and more with explaining specific library rules and policies. The video General Rules depicts a series of short scenes in which a librarian informs various bizarre readers (characters from the other videos) about certain library rules. Outside Users illustrates how non-Bilkent University users can enter the library and access its print and electronic resources by showing the clumsy attempts of an alien to join the library. This is an important topic as each year the library has up to 80,000 visits by students and academics from other Turkish universities as well as by members of the public.
Discussion: collaboration, promotion and evaluation: Library-student/faculty collaboration
Conclusion: We have argued here that dramatic online library videos, such as those described in this paper, are in theory an excellent medium for influencing young library users. An entertaining and humorous style especially serves as a means to facilitate memory and thereby encourage viewers to learn the instructional message of the videos. The format and style of such videos are also in tune with current "Web 2.0" tools and with the habits of young internet users today. The collaborative nature of this project furthermore served, in a small way, to develop the relations of the Library and librarians with the wider university community. Such marketing was one of the explicit aims of the project, and it is hoped that the videos themselves will be popular and thus improve the image of the Library within the university. However, changing attitudes among students are difficult to measure, and this is even truer for the other main aim of the project, i.e. to improve the information and library skills of our users. It is impossible to determine with any degree of certainty a future causal connection between the instructional content of the videos and any improvement in students' usage of library resources. Clearly, videos should not be used as the only method for academic librarians to promote information literacy but should be employed in conjunction with other means. It is likely, however, that such videos can make a contribution to the overall perception and usage of a library and its resources.
|
Interorganizational drivers of channel performance: a meta-analytic structural model
|
[
"Relationship marketing",
"Meta-analysis",
"Relational view",
"Interorganizational governance",
"Marketing channel performance",
"Political-economy analysis"
] |
Summarize the following paper into structured abstract.
1. Introduction: Identifying and managing interorganizational drivers for superior marketing channel performance are critical to a firm's success in the market. These interorganizational drivers not only result in greater value to customers through enhanced channel performance (Stern and Weitz, 1997) but also contribute to a firm's interorganizational competitive advantages and financial performance (Frazier, 1999; Weitz and Jap, 1995). A large body of channels literature over the past decades has examined how channel structure and channel coordination influence channel performance from diverse theoretical perspectives (Anderson and Coughlan, 2002; Frazier, 2009). For example, the political-economy (PE), relationship marketing (RM) and interorganizational governance (IG) perspectives suggest various drivers of channel performance in terms of channel design, coordination, governance and relationship management. Although these performance drivers improve our understanding of how to enhance channel performance, we need to further identify the immediate performance drivers across theoretical perspectives and determine the true effects of each driver in order to enhance channel performance more efficiently.
2. Multiple theoretical perspectives of channel performance: In this section, we first review the effects of often studied channel performance drivers from three theoretical perspectives. Then, we propose a post hoc structural model of performance drivers aligned with the RV. Finally, we discuss the moderation effects. Figure 1 displays a model that summarizes these performance drivers with moderators.
3. Methodology: 3.1 Data collection and coding
4. Results: 4.1 Effects of performance drivers
5. Discussion: 5.1 Findings and theoretical contributions
|
The impact of social and contractual enforcement on reseller performance: the mediating role of coordination and inequity during adoption of a new technology
|
[
"Performance",
"Coordination",
"Mediation",
"Contractual enforcement",
"Inequity",
"Social enforcement"
] |
Summarize the following paper into structured abstract.
Introduction: Network theorists have observed that firms are embedded in a web of dyadic relationships that span geographies and industries (Anderson et al., 1994). Networks of business relationships are thought to be systems of information sharing and dissemination that could benefit the strategic interests of actors in the network (Hohenthal et al., 2014). Business network researchers realize that the actions of a dyad or one actor in the network may have ripple effects on other actors in the network (Vahlne and Johanson, 2013). Researchers also note a shift from innovations developed within the firm to innovations developed by a network of firms (Nambisan and Sawhney, 2011). In the business-to-business (B2B) and channels of distribution literature, interorganizational coordination and joint decision-making have long been a focus of empirical research. The embeddedness of dyadic relationships in the network has been widely acknowledged (Anderson et al., 1994); however, practical difficulties in gathering network-wide data have hampered empirical research. Reflecting the industrial marketing and purchasing (IMP) group perspective on the relationship atmosphere, this empirical study examines the impact on a relationship when a radical change in interactions takes place because of the adoption of e-business tools. More specifically, we demonstrate how coordination and inequity may play mediating roles with respect to the impact of social and contractual enforcement on reseller performance.
Equity theory: Equity theory deals with the norm of distributive justice in dyadic relationships and reflects the desire of members of a dyad to have a fair distribution of benefits in a dyadic relationship (Adams, 1963; Huppertz et al., 1978). In marketing, equity theory has been applied by Huppertz et al. (1978) in the context of a retail exchange situation to examine price inequity perceptions and consumers' intentions to resolve perceptions of inequity. Channels research suggests that equity perceptions impact satisfaction/dissatisfaction with a relationship (Frazier, 1983), relationship quality (Kumar et al., 1995) and relationship continuity (Scheer et al., 2003). Following prior research, we distinguish between overall perceived inequity within a relationship and issue specific perceived inequity (Kumar et al., 1995). In this study, we focus on the reseller perceptions of specific inequity regarding online arrangements with a manufacturer, specifically the adoption of Web-based software to streamline communication between manufacturers and resellers.
Methodology: Data collection
Data analysis and results: The data analysis follows a standard procedure in structural equation modeling recommended by Anderson and Gerbing (1988). First, using Amos 5.1 software, a confirmatory factor analysis with 21 items was conducted to statistically assess the discriminant and convergent validity of the five constructs in question. Means, standard deviations and correlations among constructs are provided in Table II.
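As a rough illustration of the convergent and discriminant validity checks that such a confirmatory factor analysis feeds into, the sketch below computes composite reliability (CR) and average variance extracted (AVE) from standardized loadings and applies the Fornell-Larcker comparison. Every number, construct name and threshold here is a made-up placeholder rather than a value from the paper, and the computation is the generic textbook procedure, not the authors' Amos output.

```python
import numpy as np

def cr_and_ave(loadings):
    """Composite reliability and average variance extracted from the
    standardized factor loadings of one construct's items."""
    lam = np.asarray(loadings)
    err = 1 - lam ** 2                                  # item error variances
    cr = lam.sum() ** 2 / (lam.sum() ** 2 + err.sum())
    ave = (lam ** 2).mean()
    return cr, ave

# Hypothetical loadings for two of the five constructs.
constructs = {"coordination": [0.82, 0.78, 0.75, 0.71],
              "inequity": [0.88, 0.84, 0.79]}
phi = 0.41  # hypothetical correlation between the two constructs

stats = {name: cr_and_ave(lam) for name, lam in constructs.items()}
for name, (cr, ave) in stats.items():
    print(f"{name}: CR = {cr:.2f} (>0.7 desired), AVE = {ave:.2f} (>0.5 desired)")

# Fornell-Larcker check: each construct's AVE should exceed its squared
# correlation with the other construct for discriminant validity.
print("discriminant validity supported:",
      all(ave > phi ** 2 for _, ave in stats.values()))
```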
Managerial implications: This research reaffirms the importance of developing strong business relationships with downstream channel partners and demonstrates that relationship norms affect both directly and indirectly the success of the downstream partner. Essentially, cultivating good relationships can help the manufacturers to motivate resellers.
|
The effects of value appropriation strategies in channels on intangible firm value
|
[
"Value appropriation",
"Marketing channels",
"Channel integration",
"Channel compression",
"Franchise",
"Intangible firm value"
] |
Summarize the following paper into structured abstract.
1. Introduction: Value appropriation is an important goal for strategic marketing planning (Mizik and Jacobson, 2003). Few studies have explored the value appropriation role of marketing actions and programs, with the exceptions of the appropriability regime in product innovation management (Hauser et al., 2006; Srinivasan, 2006), advertising (Mizik and Jacobson, 2003) and the sales force (Balboni and Terho, 2016; Blocker et al., 2012). Despite these achievements, research on marketing strategies (e.g. channel strategies) from a value appropriation perspective remains very limited and warrants further research in business exchange (Ellegaard et al., 2014).
2. Mechanisms of value appropriation in channels: 2.1 Profit appropriation and resource appropriation
3. Hypotheses development: The level of value appropriation in marketing channels is determined by the effects of various value appropriation strategies used in channels (Levin et al., 1985). This section aims to link value appropriation strategies in channels with intangible firm value by illustrating how they drive the profit appropriation and resource appropriation processes. Figure 1 presents the theoretical model of this study.
4. Methodology: 4.1 Research context and data sources
5. Results: 5.1 Effects of value appropriation strategies
6. Discussion: 6.1 Findings and theoretical contributions
|
Resale pricing as part of franchisor know-how
|
[
"Franchising",
"Know-how",
"Resale pricing",
"Knowledge management",
"Capabilities",
"Pricing strategy"
] |
Summarize the following paper into structured abstract.
1. Introduction: Franchising has been growing in many countries and in many industries (Hoy et al., 2017). Even though US fast-food franchise chains are the archetypal franchise chains, franchising is neither limited to the US market nor to the fast food industry. Rather, it is steadily developing worldwide and in many industries, such as clothing, supermarkets or homecare services.
2. Literature review: 2.1 Resale prices in franchising
3. Methodology: To assess whether resale pricing can be considered part of the franchisor's know-how, we conducted an empirical study using a qualitative approach. This empirical study dealt with the French franchise market. France is one of the leading markets for franchising in Europe, with 2,004 franchisors, 78,218 franchised stores and EUR67.80bn in total revenues (French Franchise Federation, 2020). Franchising has been continuously growing in France in both the retail and service industries. Its international dimension should be highlighted too, as France is not only attractive to foreign franchisors (e.g. Burger King, Five Guys, Papa John's, Pita Pit in the fast-food industry), but also exports many franchise concepts to foreign countries (e.g. Brioche Doree, Carrefour, Ibis).
4. Findings: 4.1 Recommended resale prices as part of the business know-how
5. Discussion: 5.1 Summary of findings
|
Clinical governance, education and learning to manage health information
|
[
"Clinical governance",
"Health informatics",
"Quality improvement",
"Education",
"Complex adaptive systems",
"Health services",
"United Kingdom"
] |
Summarize the following paper into structured abstract.
Introduction: Policy documents introduced clinical governance as a mechanism by which the public can be assured that NHS organisations have comprehensive and robust systems in place for continuously improving the quality of their services and safeguarding high standards of clinical care (Department of Health, 1997). Clinical governance encompasses two distinct elements: the mechanistic element of ensuring systems are in place, and the more philosophical element of producing an environment in which clinical quality can flourish (Department of Health, 1998, 1999). The first element assumes that cause and effect can always be discovered, predicted and controlled. The second element assumes that organisational culture will foster new perspectives and insights into problem solving solutions and innovations. The report into events at the Paediatric Cardiac Unit in the Bristol Royal Infirmary suggests that change "can only be brought about with the willing and active participation of those involved in health care: the public, patients, health care professions, trusts and health authorities and government" (Kennedy, 2001). Each group may have a legitimate, but different, interpretation of quality, which sometimes results in a complex mix of conflicting goals.
Proponents of clinical governance, characterised by authors such as Scally and Donaldson (1998), Dunning and Agnes (1999), Pringle (2000) and Nicholls et al. (2000), collectively claim that it provides a framework and support for quality improvement activities, drawing parallels with the concept of corporate governance by emphasising the accountability aspects of the clinical governance agenda. Donaldson (1999) suggests the development of clinical governance consolidates the quality agenda by presenting one strategic direction. In response to claims that initial policy documents lack clarity about the meaning of clinical governance, authors characterised by Nicholls et al. (2000, p. 175) attempt to clarify the components of clinical governance, which are:
* patient-professional partnership;
* clinical effectiveness;
* risk management effectiveness;
* patient experience;
* communication effectiveness;
* resource effectiveness;
* strategic effectiveness;
* learning effectiveness;
* systems awareness;
* communication;
* ownership;
* leadership (Ellis, 2008; based on Nicholls et al., 2000).
Conversely, critics of clinical governance, characterised by Loughlin (2002) and Goodman (2002, p. 244), suggest that issues relating to quality in the NHS include a lack of resources: "The NHS has fewer doctors and fewer nurses than the health systems of almost every comparable country"; and a "lack of clarity about its true meaning and nature...allows policy makers to shift responsibility for the problems of the health service onto the workforce..." (Loughlin, 2002, p. 229). The introduction of clinical governance policy (Department of Health, 1997) was thought to have provided a distraction from "the core difficulties of the NHS while at the same time increasing management control of staff" (Goodman, 2002, p. 244). Clinical governance policy introduced a systematic monitoring system, based on a greater degree of control and accountability (Department of Health, 1997).
Dunning and Agnes (1999) describe this accountability as including an individual responsibility to work within explicit standards of professional conduct and performance; engaging in continuous professional development; and working in a way consistent with the corporate values and the strategic objectives of the organisation. "Incentive to behave" (motivation) is a characteristic of social learning (cognitive) theories that provide educators with "more effective behavioural interventions than hitherto have been available" (Rosenstock et al., 1988, p. 175). Underpinning quality improvement strategies is the axiom that poor performance typically reflects wider "system failure" (Berwick, 1989). There are fundamental questions to answer in response to the theme of this paper. First, what is health informatics?
What is health informatics?: There are many interpretations of health informatics, each representing a different perspective. The document Making Information Count (Department of Health, 2002) attempts to define health informatics as "The knowledge, skills and tools which enable information to be collected, managed, used and shared to support the delivery of healthcare and promote health." On this basis, health informatics is not exclusively the concern of technologists and enthusiasts but is of relevance to all those who generate, retrieve and use information and technology to support health care. The ongoing challenge for commissioners and providers of education is to embed health informatics into all clinical and non-clinical educational and training programmes as far as possible, to help health care staff manage information better in a world that is expecting more "information empowered" professionals (NHS Connecting for Health, 2009).
In the NHS Next Stage Review (NSR) (Department of Health, 2008), Lord Darzi set out his vision of an NHS with a focus on quality as the organising principle. The report highlights that in the twenty-first century the NHS faces a particular set of unavoidable challenges, summarised as:
* rising expectations;
* demand driven by demographics;
* the continuing development of the "information society";
* advances in treatments;
* the changing nature of disease; and
* changing expectations of the health workplace.
One problem for informatics that supports clinical practice is the tension between local specialism ("the way we do things round here") and approaches that seek to standardise, recognising that outputs may be of interest to one or more stakeholders and the need to reduce asymmetry of information. To this end, educational initiatives such as Learning to Manage Health Information (LtMHI), first developed in 1999, seek to provide a common framework in health informatics for clinical professionals to promote a common language and currency. The document emphasises that informatics is now an integral part of contemporary clinical practice by considering the three principal areas of activity in health care: working with the patient; recording the patient contact; reflection and learning. The framework reflects key assumptions and guidance developed in consultation with a wide range of stakeholders who are concerned with commissioning, developing and delivering clinical educational programmes, and encompasses the following themes:
* essential information technology skills;
* communication;
* health and care records;
* the language of health: clinical coding and terminology;
* data, information and knowledge;
* protection of individuals and organisations;
* clinical systems;
* ehealth applications.
The 2009 edition of LtMHI has continued this process in light of developments in clinical practice and technology. The reported findings of an in-depth longitudinal study identify a composite model of intersecting themes that goes beyond controls, compliance assurance and archiving of corporate policies and protocols, to enable non-hierarchical, exploratory models of problem appreciation and problem solving by a plurality of stakeholders (Ellis, 2010). Clinical governance is viewed as a negotiated, social activity rather than a fully codified, legislated rule set. This paper is an extension of this view, with a particular emphasis on factors relating to the development of educational programmes and health informatics.
An emphasis on the responsibilities of individuals highlights the importance of lifelong learning and the role of health informatics in actively managing individual performance. Complex adaptive systems and social learning models offer a different way of thinking about complex situations, one that considers the conditions contributing to the environment within which such situations operate, which may include social, political, technological and financial influences.
Systems based thinking and its relation to learning theories: Open systems theories evolve out of the work of Bertalanffy, a biologist, which takes into account the dynamic whole of the organism, its interaction with its environment and its permeable boundaries (Flood, 1999). Second-order systems based thinking moves away from simple objective observation to understand humans as participants in systems, allowing for the flow of energy (motivation, information and innovation) and networked interactions. The origins of learning theories can be traced to Lewinian field theory (Lewin, 1936, 1947), cognitive theory and humanistic psychology (Knowles, 1984; Brookfield, 1986; Atkinson et al., 1996). According to the cognitive perspective, learning is an organism's ability to represent aspects of the world mentally and then operate on these mental representations rather than the world itself (Atkinson et al., 1996). Learning theories identify the role of feedback in sustaining and improving human performance at work, involving single loop and double loop learning, associated with proactively challenging and influencing a range of different or conflicting perspectives (Senge, 1990). Similarly, learning theories suggest exploratory models of problem appreciation and problem solving by a plurality of stakeholders that reflect practical day-to-day concerns relevant to participants' daily working lives and activities (Kolb, 1984; Hayes, 1995; Pendleton, 1995). The importance of education and professional support that focuses on experiential learning as a tool to change behaviour is identified (Schein, 1985; Schon, 1983; Schon and Rein, 1994; Berwick et al., 1992; Berwick, 1996, 1998). Kolb (1984) identifies an experiential learning model that has four phases:
1. concrete experience;
2. reflective observation, in which the learner rethinks through what has occurred;
3. active experimentation;
4. abstract conceptualisation, in which the learner normalises the processes and knowledge.
The model is premised on changing the basic assumptions ("the way we do things round here") that result from past learning. Learners go through a cycle in which they acquire knowledge, assimilate, experiment and then normalise the learning into their daily work, informed by "plan-do-study/check-act" feedback loops (Senge, 1990). Contemporary learning models manifest an understanding of the need for skills and knowledge to be embedded in experience, and allow reflection on that experience to create new meaning and enduring changes in behaviour.
Social learning theories emphasise the role of expectations held by the individual. "Behaviour, in this perspective, is a function of the subject value of an outcome and of the subjective probability (or 'expectation') that a particular action will achieve that outcome" (Rosenstock et al., 1988, p. 176), suggesting that behaviour is determined by expectancies and incentives. In this context, underlying trends influencing management approaches to quality improvement programmes within primary care include:
* Explicit rules and regulations supplementing the implicit codes governing professional/patient relations (Baker, 2000, 2001; Baker and Grol, 2002).
* Development of a balance of power between various judgements on quality (Ferlie, 1994; Ferlie et al., 1996).
* An increasing status of the GP within health services (Meads, 1996; Rigby et al., 1998).
Davies and Mannion (1999, pp. 247-8) write that the following developments led to the increasing importance of quality towards the end of the twentieth century:
* An increase in the evidence-base of what worked in clinical practice.
* Sophisticated data systems and the expertise to interrogate them.
* Widespread variation in clinical practice and outcomes.
* Cost cutting by managers with apparently less regard for the quality of care.
NHS quality improvement programmes provide operational frameworks that incorporate various mechanisms to help bring about clinical governance through regulation, incentives, continuing professional development (CPD), peer review and organisational quality improvement methods. While the above theoretical models begin to account for the significance of the interactions between human participants and educational initiatives, they fail to address the nature of clinical governance. The next section introduces the case studies.
Methodology: The longitudinal study, located within the English National Health Service (NHS) between 1999 and 2005, is case-study based, using a multi-method approach to data collection within two Primary Care Organisations (PCOs). The research strategy is conducted within a social constructionist ontological perspective. This approach contextualises clinical governance and the trend towards collaborative partnerships and federated models of practice, enabled by developments in primary care informatics (Ellis, 2010). Limitations of case study methodology include a tendency to provide selected accounts. These are potentially biased and risk trivialising findings. Rooted in a specific context, their generalisability to other contexts is limited by the extent to which contexts are similar. One researcher's own interpretation of reality, as a social construction, may not resonate with that of another. Reasonable attempts were made to minimise any bias. The diversity of data collection methods used in the study was an attempt to counterbalance the limitations highlighted in one method with strengths from alternative techniques. The methods used to collect and analyse the data from a range of sources, including respective strengths and weaknesses, are illustrated in an overview of the case study methodology in Figure 1.
Choice of case study sites: The choice of case study sites was determined by the purpose of this research study; the convenience sampling of PCO case studies focused on established communication links and close relationships with participants, facilitating access to undertake this study as the researcher worked within the NHS case study localities. This provided a number of advantages in the study that included access. The researcher was also aware of a downside to this privileged position in relation to the potential to introduce bias. Reasonable attempts to minimise any bias were made. For example, a 100 per cent sample was applied to the postal survey in each case study; in addition, volunteer interviewees were requested by issuing a letter and information pack to each case study clinical governance committee, to each practice and primary health care team. The letter informed potential respondents of the role of the researcher and the purpose of any involvement. It was stressed in both the written material and with verbal reassurances that there was no obligation to participate. It was emphasised that refusal to be involved, or any matter divulged during interview, would not jeopardise the working relationship between the researcher and participants. The researcher's participant posture was necessary in order to gain first-hand experience of the workings of clinical governance. It can be argued that this facilitated the development of a rapport that improved the amount and depth of empirical data collected, counterbalancing negatives associated with this approach, such as those associated with the potential for bias. The findings are not intended to be generalisable to disparate contexts. As described earlier, the heterogeneous factors of English NHS primary care suggest the importance of context. Approaches that seek to emphasise generalisability are unlikely to provide sufficient description of specific local organisation and context. The study captures the experiences and perceptions of participants involved locally in implementing clinical governance and coordinated actions that include establishing educational programmes to support learning to manage health information. The sampling strategy, therefore, is purposive in nature. General characteristics of the studied, geographically linked, PCOs are described next. The first PCO: semi-rural, with an approximate population of 70,000, 11 practices and multidisciplinary primary care teams including 37 GPs; established in 1999 as a level 2 Primary Care Group (PCG); took devolved responsibility for managing a healthcare budget of £618 per capita; PCO boundary coterminous with the relevant Social Services Department. The second PCO: urban, with an approximate population of 150,000, one acute trust, 25 practices and multidisciplinary primary care teams including 84 GPs; established in 1999 as a level 2 PCG; took devolved responsibility for managing a healthcare budget of £665 per capita; PCO boundary coterminous with the Social Services Department. The case study design led to a mix of quantitative and qualitative data - the type of data led the plan of analysis, each type of data being analysed separately. For example, survey responses allowed analysis of attribute data as to the strength of agreement, or disagreement, with statements. The attribute data was subsequently presented as values of particular variables that were named and defined, with corresponding data input using an incidence data matrix.
A total of 30 in-depth interviews were conducted with self-selecting volunteers from multiple disciplines, which revealed dimensions of approaches and perceptions of clinical governance and learning programmes from those actively engaged within the two studied PCOs. Interviewees consisted of 17 males and 13 females: GPs, clinical governance leads, chief and assistant executives, managers, a pharmaceutical advisor, nurses and CPD team members. Following transcription, text was uploaded into a software application (Atlas TI© - a registered trademark of Scientific Software Development, Berlin). The central analytical approach adopted in the development of themes and categories was open coding, derived from interviewees' own words. The following section introduces complex adaptive system (CAS) theories, which are a valuable tool to help make sense of natural phenomena, including human responses to problem solving within organisations.
Complex adaptive systems: A complex adaptive system (CAS) approach, in the context of this paper, is interpreted as a framework that assists in thinking about the nature of quality improvement and learning programmes. Drawing on the CAS literature, it is argued that the governance of quality improvement is based on three propositions. The first proposition: the meanings attributable to the explanations available to a PCO for achieving quality improvement are multifarious. The argument: PCOs operate in a complex network of general practices, Primary Health Care Teams, Social Services and other local agencies, each of which has some influence on the governing activities of quality improvement. Empirical support for this proposition will be found in organisations that apply CAS principles that engender mutual recognition of common or complementary strategic agendas. The second proposition: the scope and influence of quality improvement programmes self-organise across each PCO. The argument: clinical governance systems update based on experience, and any part can influence other parts through connectedness and interdependencies. Empirical support for this proposition will be found in activities that include regulation, incentives, CPD, peer review, organisational QI methods and so on, and in interdependencies among each organisation's change management programme. The third proposition: given the combination of clinical governance activities and information exchanges, patterns of collaborative behaviour exist in each organisation. The argument: within each change management programme are combinations of activities that distinguish an organisation's response to the introduction of policy and an ever-changing environment. Empirical evidence for this proposition will be found in improved symmetry between different levels of the system. Key elements and principles that characterise a CAS are introduced below (Reynolds, 1987; Kauffman, 1993; Gell-Mann, 1994). They form useful models of the types of social interactions between professionals looking to implement change (Cilliers, 1998; Anderson, 1999; Ellis, 2010).
CAS Element - Multiple agents, different world-views
Results: The following broad-based themes will be discussed:
* Mutual adjustment of a plurality of stakeholder perceptions, preferences and priorities.
* The development of information and communication systems, empowered by informatics.
* Emphasis on education and training to build capacity and capability.
Multiple stakeholder perceptions, preferences and priorities
Lessons learnt: Arising from the results of the study, there is a need to shift NHS policy makers' thinking away from a hierarchical command and control emphasis, which advises managers what to do to ensure that the organisation achieves goals in an optimum way. The experiences of those described within the study conflict with the notion that performance is optimised when structures and processes are introduced based on the assumption that the quality of healthcare is predictable. The results suggest that quality improvement systems develop locally based on information, knowledge and experience exchange; any part of the system can influence other parts through networks and interdependencies. On this basis, a CAS approach accommodates coping tactics that emerge in recognition that paradox and anxiety are characteristics of systems that evolve. A key issue is that the study provides a particular focus on the need to develop practical skills, knowledge and competencies that are applied in the workplace, linking those involved with the implications of their actions and a wider dimension of organisational relations.
Recommendations: Central to the recommendations is the belief that the best decisions are based on the best information. However, even equipped with the best information, decision makers need a range of professional skills and abilities to be able to utilise information to transform results. Educational programmes need to emphasise this central role of high quality information in the support of decision making and inspire positive change by bringing health informatics to life through innovation, research-informed approaches and real-world practicality. Figure 2 is a graphical exemplar that shows the ways in which an educational programme module can reflect external environmental influences in order to equip students with appropriate knowledge and skills of informatics to apply in their workplace. There is a need for educational programmes to demonstrate in particular:
* Productive partnerships that encourage ownership of the educational programme to ensure it meets NHS needs by effectively involving NHS staff, student participants, service users, academic staff and other stakeholders in the design; ensure that learning outcomes map to relevant professional bodies' requirements and competency frameworks; and utilise evidence-based, cost-effective, proven innovative delivery methods and evaluation of the learning.
* A focus on enabling students to feel confident with sustainable change, able to lead and innovate in their everyday work.
* Recognition of the need to be flexible and responsive.
There is a need to consider the benefits of eLearning. Apart from developing a generic understanding of systems found in practice, eLearning programmes can facilitate multidisciplinary training using complex scenarios and therefore promote team working and leadership development. ELearning is not simply cost-effective but can also address the need to ensure equality of opportunity for the whole range of students from all backgrounds. Examples of evidence include Larsen (1992), who found no differences in post-test scores based on learner style preferences, and Kass et al. (1998), who found that using computer simulators actually eliminated a gender gap that was present when traditional learning was used. Hawthorne et al. (2009) found that eLearning was preferred by a majority of students, while there was no difference in achievement compared to more traditional delivery of a module on cultural diversity as part of a clinical curriculum. Paechter et al. (2010) have shown that two aspects contribute strongly to learning achievements and course satisfaction when using an eLearning delivery methodology: students' achievement goals and instructor support. ELearning-based educational programmes can also have significant benefits for patients and carers; online resources can contribute to patient empowerment, enable self-management of chronic and short-term conditions and promote communities of support for carers (NHS Connecting for Health, 2009).
Conclusion: The study has highlighted that educational programmes which support quality improvement empowered by health informatics are increasingly less determined through bureaucratic lines of authority and more often through a combination of emergent formulations. With a change in perspective comes the possibility of a different way of acting and relating. Various perspectives, presenting potentially conflicting views of quality, have been described. The complexity of these perspectives can be partially attributed to a greater emphasis on the inclusion of stakeholders, including patients and the public in general, in healthcare decision-making. This approach can contribute a degree of flexibility and resilience to the problem-solving capability of the whole system. On this basis, a CAS approach accommodates coping tactics that emerge in recognition that paradox and anxiety are characteristics of systems that evolve. The implications are that educational programmes need to ensure that participants are equipped to demonstrate the personal qualities, values and behaviours, key skills, and energy required to provide a patient-led healthcare system. Participants need to learn to exploit the analysis and use of information within the current economic context. It is suggested that eLearning will play an increasingly important role in healthcare. It enables the rapid creation and dissemination of quality-assured learning content and provides the opportunity for more flexible access to learning, with sharing of learning resources across the NHS, including Social Care and the Education sector (NHS Connecting for Health, 2009).
|
Brazil's image and Brazilian personality: a systematic review from the viewpoint of cordiality
|
[
"Personality",
"Country image",
"Brazilian people",
"Cordial man"
] |
Summarize the following paper into structured abstract.
1. Introduction: In an age of increasing globalization, countries around the world face intense competition for development factors: exports, investment, tourism, students and skilled labor (Rojas-Mendez, Papadopoulos, & Alwan, 2015). Thus, to be successful, countries must be distinctive and improve their image (Hakala, Lemmetyinen, & Kantola, 2013).
2. Literature review: 2.1 Organizing the literature: theoretical perspectives and backgrounds of residents' personality
3. Methodology: A systematic review of the literature was carried out for this research. Churchill (1991) argues that one of the most efficient ways to better understand a problem is by searching the literature. In addition to the systematic review performed by collecting and analyzing scientific articles, secondary data were also obtained through the book Roots of Brazil by Sergio Buarque de Holanda for comparison and discussion of the content of those articles based on the concept of the cordial man.
4. Results and discussion: The results are divided into four parts. The first consists of the basic characteristics of the papers found in journals, the second covers the analysis of the definition of Brazilian personality in journals, the third part deals with the relations between the personality traits found in journals and the cordial man of Roots of Brazil and, finally, the fourth part presents an agenda of future studies relating the Brazilian personality to several managerial areas.
5. Final considerations: Through a systematic review of studies on the Brazilian country image and its dimensions over a 16-year period (2001-2016), this research showed that the main characteristics of the Brazilian personality are: sensual, cheeky, cheerful, creative, hospitable, friendly and cordial. These traits appeared most frequently in the articles found.
|
Pragmatic thought as a philosophical foundation for collaborative tagging and the Semantic Web
|
[
"William James",
"Charles Sanders Peirce",
"Pragmatism",
"Folksonomy",
"Collaborative tagging",
"Semantic Web"
] |
Summarize the following paper into structured abstract.
1. Introduction: When it first appeared, collaborative tagging, or folksonomy, was celebrated for the diversity of perspectives it was capable of representing and its expression of those perspectives through natural language. More recently, however, concerns about the ability of tags to help users find specific resources have led to proposals to rein in that diversity. Such proposals would threaten to mold folksonomies into, at very least, semi-controlled vocabularies, thus restricting the same traits for which they initially drew praise. Moreover, these limitations were proposed within less than a decade of folksonomies' first appearance.
2. Discussion: 2.1 A pragmatic view of the long tail
3. Conclusion: This study has served both as a reaction against recent efforts to control the diversity of folksonomies and a response to recent calls for philosophical foundations for the emergence of collective intelligence through the Web. It has proposed the pragmatic philosophies of William James and Charles Sanders Peirce as a foundation from which to argue for increased appreciation of the value of the full range of tags in a folksonomy and for allowing the variety of perspectives it encompasses to be articulated over time. As a result, the full potential of folksonomies can be realized in order to link resources and advance understanding.
|
Transforming the nature and scope of new product development
|
[
"New product development",
"Product development",
"Blue ocean strategy",
"New product failure rates",
"Market crowding",
"Corporate strategy"
] |
Summarize the following paper into structured abstract.
__NO_TITLE__: An executive summary for managers and executive readers can be found at the end of this article.
Introduction: In 1976, Shelby Hunt commented on the nature and scope of marketing and in so doing, transformed how both business and non-business organizations market themselves and their product offers (Hunt, 1976). Hunt's examination changed how marketing was understood and broadened its scope to include a host of activities, services, and products that had not previously been viewed in marketing terms. As a result, non-profit organizations, government entities, political parties and society as a whole have benefited. Similarly, Theodore Levitt's (1960) classic article dealing with marketing myopia alerted marketers to the dangers of focusing too much on current customers, technologies, and competitors. Both articles can help product developers and brand managers to avoid the inevitable traps that are logical extensions of industry practice. Arguably, new product development could benefit from such an examination that refines its understanding, broadens its scope, and alerts marketers to relevant factors that can affect its success.
Current new product development: New product development is a well-studied process, is the lifeblood of numerous firms, and represents the best hope for future growth. Over the years, it has been refined with attention paid to the consumer (Herstatt and von Hippel, 1992; Hoffman et al., 2010; Fuchs et al., 2010), the development process (Cooper, 2009; Fuller, 2010; Hoyer et al., 2010; Sandmeier et al., 2010), the nature of the product (Decker and Scholz, 2010), the channel (Lan et al., 2007), the nature of the marketing venue (Fuller et al., 2009; Arakji and Lang, 2007), and the source of the product concept (Wyld, 2010). Despite the evidence of attempts at continuous improvement, the need for change still exists. One symptom of the need for change is the persistent level of new product failure. Very little historical data on product failure rates exists in the literature. Some references, ancient even in 2010, cite failure rates of 40-45 percent (Griffin, 1997), 33 percent (Booz, Allen & Hamilton, 1965), 80 percent (O'Meara, 1961), or 89 percent (Schorr, 1961). The most recent reference states the failure rate for consumer packaged goods at 49 percent (Barczak et al., 2009). During the last millennium, new product developers lamented the unacceptably high failure rates for new product introductions. Some sources claim the rate is as high as 95 percent (Schlossberg, 1990). Although recent references have not supplied metrics, they have concentrated on "improving" the process (Hlavacek et al., 2009). Still others cite the crowded competitive space and attribute failure to the clutter in the marketplace (Redmond, 1995). The data reflect the risks and costs of new product development. Even the costs in a single industry can be staggering: a conservative estimate places the cost of failed product R&D in the electronics industry at more than $20bn per year (Clugston, 1995). Those results prompted organizations and researchers to refine the new product development (NPD) process. Researchers have explored the causes of new product failures, and the literature reports significant attempts at reducing the risk inherent in new product introductions (Sarin and Kapur, 1990). Despite extensive work, new product failure rates appear to be increasing in consumer product categories (Redmond, 1995), with staggering economic and opportunity costs. One attempt to improve the NPD success rate focused on leveraging the brand equity of existing products. Others concentrated on increasing the accuracy of the process by employing new development techniques.
Line extensions: leveraging brand equity: Companies have sought to reduce their risk in introducing new products in various ways. Since firms learn from their successes, leveraging the success of an existing brand name has become a favorite. New product developers have refined the practice to include two separate areas (Farquhar, 1989). The first, a line extension, introduces a new product in the same product category as the parent. The second, a category extension, introduces a new product in a different product category. In the 1960s, introducing a product extension was a carefully considered decision. Companies were afraid that the unanticipated shortcomings of a line extension would damage the sales of the parent. Thus, Coca-Cola reduced its risk when in 1963 it introduced a diet cola named Tab. Essentially the product was Coca-Cola with an artificial sweetener. Sensitive to consumer reactions to taste, the company saw the potential for problems. Branding the product Diet Coke or Diet Coca-Cola carried risks. The fear was that consumers would not like the taste of the sugar-free "Coke" and reduce their consumption of the original. Only in 1982, after 19 years of Tab's successful performance, did the company feel secure enough to introduce a branded Diet Coke. The decision was responsible and entirely correct. Coca-Cola Corporation's most valuable asset is its brand name and failure to protect it would have been irresponsible. Coca-Cola's careful approach reduced the risk of damaging the parent brand and validated the concept of line extension. History proved the company correct. The new formula proved to be so well accepted that one year after Diet Coke was introduced, it surpassed the sales of its sibling, Tab. Research has shown that line extensions reduce the risk inherent in product introductions by allowing consumers to form expectations about the new product (Kim and Sullivan, 1998) and by leveraging the positive expectations about the parent brand (Keller and Aaker, 1992; Martin et al., 2005). Over time, critics of consumer products have lamented the proliferation of line extensions. However, the basis for their popularity is the increased chance of success and reduction of the potential for loss. In the soft drink industry, Coke's successful extensions have resulted in remarkable product proliferation. In fact, around the world there are over 30 different versions of Coke and Diet Coke. They encompass versions with and without sugar or with or without caffeine. They include new flavors like vanilla, lime, raspberry, and cherry and even include a version with added vitamins. Not to be outdone, Pepsi followed suit and introduced its own extensive lineup of parallel line extensions that cover and clutter the market. The result of these introductions was tantamount to an industry stampede. Competitors focused on increasingly smaller segments, each offering some small unique benefit in order to offer something new. For commodity products with little to differentiate them, consumers usually respond with variety-seeking behavior. Thus, the brand extensions make some sense by offering an alternative that is different and not owned by a competitor. However, since some of the extensions are easy to copy, one byproduct of line extensions is an increasingly crowded competitive space. Moreover, the proliferation of a series of line extensions may dilute brand equity. One more effect is to shift attention away from consumers who do not want the selection of product offerings but desire something else. These consumers may comprise a viable market segment not served by others and untouched by the brand extension process.
Refining marketing metrics and development techniques: As product developers have developed their own craft, they have introduced a variety of refinements. These involve techniques aimed at the accuracy of information, like voice of the customer research (Brandt, 2008) and careful segmentation and product differentiation. Other refinements concentrate on making the NPD process itself more robust. Even today not every product is a success, and careful product development and market testing is the rule. One consumer marketing example is Pepsi-Cola's 2001 development of its ill-fated coffee-flavored cola (Bevnet, 2001). It had a catchy brand name - Pepsi-Kona, reflecting Kona coffee from Hawaii and sounding a bit like the parent beverage, Pepsi-Cola. The brand concept tried to take advantage of Starbucks' increasing success in revitalizing the lagging coffee-drinking market. Among its target audience, i.e. adults, the brand seemed to have the potential to earn market share. Its sugar-free, diet version even boasted a taste indistinguishable from the sugar-sweetened parent, a rare accomplishment and potential strategic advantage. Pepsi's marketing staff is professional and employs tested marketing metrics and techniques, including the stage-gate process, to conduct products through new product development (Cooper, 2009). In essence, the stage-gate process is a refined version of the new product development process. It conducts developers from ideas to prototypes to final products ready for release, and builds in failure points to weed out products with problems. Specifically, it starts at the discovery stage then proceeds in turn to scoping, building the business case, development, testing and validation, and finally launch. There is an additional stage, post-launch review. The stage-gate process inserts an evaluation between each stage that serves as a barrier to progress. Only if the project overcomes the gate obstacle can the project move forward. Thus, the gates prevent projects that fail to "open" them from proceeding along to launch. As a result, new product developers can reduce their losses. The stage-gate development process has reduced new product failure rates. Using the metrics, Pepsi Kona did not complete marketing testing successfully and clearly did not exceed the threshold for a successful introduction. Ironically, tools like the stage-gate process may reinforce the pressure to extend product lines. They yield quantifiable metrics that can make decision making easy. In contrast, totally new-to-the-world products have few or no close analogs that might help forecast their acceptance. Their metrics can only be estimated.
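To make the gating logic concrete, the sketch below models a stage-gate pipeline in Python. The stage names follow the description above; the scoring scheme, the 0.6 threshold and the example project are purely hypothetical illustrations, not part of the original process description.
```python
# Minimal sketch of a stage-gate pipeline (hypothetical scores and threshold).

STAGES = ["discovery", "scoping", "business case", "development",
          "testing and validation", "launch"]

def run_stage_gate(project_scores, threshold=0.6):
    """Advance a project stage by stage; the gate after each stage
    blocks progress when that stage's score falls below the threshold."""
    passed = []
    for stage in STAGES:
        score = project_scores.get(stage, 0.0)  # hypothetical gate metric
        if score < threshold:
            return passed, f"stopped at the gate after '{stage}'"
        passed.append(stage)
    return passed, "launched; post-launch review follows"

# A project that, like Pepsi Kona, fails at market testing:
scores = {"discovery": 0.9, "scoping": 0.8, "business case": 0.7,
          "development": 0.75, "testing and validation": 0.4}
print(run_stage_gate(scores))
```
The point of the gates is visible in the return value: a weak test score stops the project before launch, capping the developer's losses.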
The dashboard trap: As the marketing and product development professions have matured, their tools have become more sophisticated. Combinations of metrics, called marketing dashboards, have been developed to increase efficiency and help guide decision-making (Pauwels et al., 2009). Dashboards are customized by individual companies to measure what management feels are the most important operational measures. Significantly, they have been linked to firm performance (O'Sullivan et al., 2009; Miller and Cioffi, 2004). Using appropriate dashboards, marketers can make intelligent decisions and monitor the results of those decisions. The metrics are so widely used that managers in a given industry are likely to develop similar dashboards and use them to guide their competitive actions. Companies can monitor their dashboards to sense competitive initiatives, changes in consumer preferences, and other factors. In some cases, the metrics help sense events with enough speed to enable an equally speedy response. However, when competitors use similar groups of metrics, there can be a similarity in competitive tactics that can lead to countervailing reactions that stifle innovation and reduce success. That situation can be termed the "dashboard trap". In fairness, the concentration on metrics has prevented costly mistakes. The result of the competition is an increasing tempo of moves and countermoves that kill off the weaker entrants and leave the survivors no option but to continue competing. The ever-sharpening internal industry focus creates a marketing myopia that blinds competitors to escape routes from the destructive competition. New product development that maintains the focus within the industry segment offers no hope of escape. If market growth slows, profits may decline or disappear and the competitive carnage can be dramatic. In that case, the only escape may be to abandon ship with no lifeboat.
Environmental factors and competition: Product failure can be viewed as failure to build sufficient brand equity to succeed. Brand equity in turn is built on the consumer's perception of the brand, its company and its competitors. As the competitive playing field becomes more crowded, it becomes more difficult to gain the consumer's attention and create a favorable impression. One strategy to build brand equity is to reduce the clutter so that consumers can concentrate on a brand. Most research on new product failures has focused on a firm's activities in specific projects. However, recent literature proposes that new product outcomes are influenced by macro-level or environmental factors (Redmond, 1995). Environmental factors can explain why competent firms in one industry consistently experience higher failure rates than those of other competent firms that operate in a different industry. For example, when two different industries, consumer packaged goods marketing and industrial marketing, are compared, one finds that failure rates for new packaged goods are consistently higher than those for new industrial products (Redmond, 1995; Barczak et al., 2009). While entrepreneurs have created new markets, some of which are currently in the early stages of the product life cycle, most markets are established and maturing. In many established markets, supply is overtaking demand and competition is increasingly a battle for market share (Trombetta, 2009). Market share competition reduces profits and the potential for growth. One empirical finding is that using line extensions is a common technique that food companies use to achieve sales growth. Redmond (1995) notes that a consistent focus on line extensions creates market clutter. Clutter makes it difficult to achieve significant differentiation, to attract a viable market segment, and to thrive. Thus, crowded market space generates the equivalent of natural selection in which only the strongest brands survive. In contrast, industrial marketers face a much less cluttered field of competition and introduce relatively few new products. In the industrial marketing case, opportunities for differentiation and segmentation increase the chances for success.
Gaining room to compete - value innovation: Recognizing the effects of market crowding is important for new product developers. However, implementing strategies to overcome those effects is arguably more important. Groundbreaking research by Kim and Mauborgne (2005, 2007, 2009) proposed several strategies for avoiding the negative aspects of competition by seeking value innovation. Instead of following the straitjacket of traditional industry rules, the creators of blue oceans rejected their competitors as their benchmark. Their actions made competition irrelevant by creating a leap in value for both buyers and the company - value innovation (Kim and Mauborgne, 1997). This perspective offers the hope of escaping destructive competitive market space for a new environment with more opportunities.
Blue oceans - red oceans: The perspective, blue ocean strategy, has not been reported in connection with new product development in the literature. Instead, it is focused on company strategy. This is ironic, since it inherently affects the nature and scope of product and service offerings and appears to be directly applicable to NPD. Blue oceans differ from the traditional red oceans in which most competitors operate. Red oceans are the result of blood-letting competition that figuratively stains the water red. Red ocean strategy is based on competitive positioning and market players following the competition. Three major elements typify this strategy. The first is competing on the same factors as the rest of the industry. The second is accepting existing industry boundaries and rules. The third is a focus on exploiting existing demand (Trombetta, 2009; Kim and Mauborgne, 2005, 2007). In essence, lured by nearby targets, the firm is incarcerated within its industry. In contrast, blue oceans are uncrowded market spaces in which marketers can escape from destructive competition. They are based on the premise that the competitive game can be changed, and the result can be the value innovation that Kim and Mauborgne describe. Value innovation is also comprised of three elements. The first is to reconstruct the elements of value. That is tantamount to new product and service development. The second element challenges industry operating rules and prompts competitors to look across industry boundaries to new opportunities, instead of employing NPD in the old competitive space. The third element creates new demand from non-customers, new revenue streams or other sources. The results convey considerable freedom but require breaking away from familiar and successful patterns.
Using blue ocean strategy - the strategy canvas and the four actions framework: The heart of applying blue ocean strategy is analyzing the relationship of the company to its customers and its industry. Firms provide value to grow and compete. Providing the right value to the right customers in ways competitors cannot duplicate is one key to success. Blue ocean proponents advise creating a strategy canvas, a graphical depiction of three factors: the industry factors of competition (the x-axis), the relative level of the factors that a competitor supplies compared to the industry (the y-axis), and the competitor's strategic profile (the line that connects the factors). One example of its use is in the women's clothing industry. Unlike department stores or discount retailers like Wal-Mart, fashion retailers offer stylish clothing at high prices targeted to young women with disposable income. They depend on substantial advertising and offer convenient access in the form of location and access to parking. Clothing Vault (CV), a fashion retailer based in California, recognized the crowded market and sought to find new competitive space. It started by looking at non-customers, fashion-conscious girls with limited budgets. CV learned that the target audience, girls without much disposable income who want a fashionable wardrobe, get one by borrowing clothes from friends. That insight prompted a novel concept: clothing sharing. The first step was to chart the relative level of attributes supplied by firms in the industry. It created a strategy canvas for fashion retailers, shown in Figure 1. In relative terms, price (high price), advertising, and convenient access are offered in higher quantities than assortment. Careful research uncovered more about the non-customer. The new social networking trend seems common to the age group known as the Millennials, including both customers and non-customers. Thus, websites such as MySpace, Facebook, and SecondLife were very familiar and of importance to the segment. It was clear that such social networks are an important means of communication, creating word of mouth and supplanting traditional advertising. In addition, research also uncovered the most relevant definition of convenience: easy location of the right style and the right size. Retail stores are arranged to maximize the visibility of specific styles that are grouped together. Physically searching through clothing racks is less convenient than reading through a list using a computer. The canvas makes it possible to take one or more of four basic actions that can be used to create a value innovation. The four actions are eliminate, reduce, raise, and create, and are detailed in Figure 2. The factors are similar to product attributes like cost, convenience, new versus used, accessible parking, and anything else that is relevant. Companies have used the canvas to assess where they stand versus competitors and in relation to consumer desires. By adjusting the levels of the factors they provide, companies can often satisfy customers who are currently being served by others.
By surveying potential customers and current non-customers, CV gained insight into what level of each attribute the non-customers desired:
* The primary attribute they desired was affordability in the form of low cost.
* Ambience was not important; a warehouse atmosphere that delivered "fashion at a fraction" - low-cost clothes with the full measure of style - would be acceptable.
* Convenient location, at or near a local mall, was not necessary; non-customers would go out of their way to save.
* Service, while valued, was not worth paying for. Instead, the ideal of being able to find the exact size and style in a short time was important.
* The target audience did not really value assortment. It seemed important, but they just wanted to find the items that they wanted.
* Parking was identified as important but not highly important. Subjects who were used to shopping at malls with parking lots tended to assume that it would be provided.
* Social networks played an important role in the daily lives of the subjects. They became almost a substitute for e-mail. In fact, posting pictures on social network pages has reduced the need to attach a picture to an e-mail. Instead of sending it to a friend, the friend could use the social network page and just find it.
* The final attribute, selection convenience, was rated as important. Making it easy to find the right garment would be highly valued.
Figure 3 depicts the differences between Clothing Vault (CV) and the rest of the industry. The y-axis scale reflects the original differences in Figure 1, but the absolute values offered by CV and the rest of the industry compress the scale. By saving on several expensive elements, including advertising, and substituting several inexpensive attributes, including social networking, CV has achieved significant cost advantages while serving its target audience very well.
The six paths framework: Blue ocean strategists provide a tool to create new market space: the six paths framework (Kim and Mauborgne, 2009). Instead of competing within a given market space, firms and product developers are encouraged to compete across them. The six paths refer to the six boundaries that constrain competition. They are:
1. alternatives within industries;
2. strategic groups of customers;
3. buyers (as distinct from purchasers, deciders and users);
4. scope of products and services (including complements);
5. function-emotional orientation (the balance between functionality and emotional connection); and
6. time (the logical extension of a trend).
Figure 4 depicts the paths. Together, they present marketers with six distinct opportunities to develop new offerings and escape to uncluttered markets. Each one allows new product developers to create a value innovation that will satisfy customers and be insulated from competition. One can apply the four actions framework to each path to create new offerings. In each case, considering these paths opens new possibilities for competition and new opportunities for blue oceans. In most cases, marketers are challenged to focus on the customer and determine what values are missing. Blue ocean strategy is a change in approach that forces developers to look beyond their tested methods of discovering product opportunities. Some of the best methods carry a bias bred of past success. Even the tested technique of listening to the voice of the customer (VOC) (Brandt, 2008; Cooper and Edgett, 2008) carries with it the danger of functional fixedness that focuses too closely on the current consumer. VOC techniques work; marketers do get a deeper understanding of customer wants, needs, and challenges. That may also be a problem. In the typical red ocean competitive arena, focus on customers and successful solutions to their needs led to the clutter of line extensions. Focusing on groups like non-customers, underserved strategic groups or other targets may be more successful. Each path deserves its own explication.
Examples of the paths: Path 1: focus on rivals within industries
Applying blue ocean strategy to new product development: The first tool, the strategy canvas, would help with ordinary new product development since it focuses on product attributes, the competition, and consumer preferences. However, using the strategy canvas alone will not provide uncluttered market space. Firms that remain within their original industry boundaries may gain temporary competitive advantage from their insights. Inevitably, competitors will initiate changes and erase the advantage. NPD teams seeking new product ideas may find it valuable to apply six paths analysis before they develop any ideas. By studying market conditions, including consumer preferences and the actions of competitors, they may be able to select targets that will avoid competition. Thus, applying blue ocean concepts early in the process promises the potential of a lower product failure rate. The value of the approach is enhanced by the fact that the six paths cover all the possible options. They provide a palette of strategic choices that can be used in a variety of situations. Most important, they free teams from destructive market share battles and the cannibalistic line extension trap. To accomplish this change, companies must supply structures that encourage this approach, along with resources and reward systems that free managers from the old industry focus. Used for product development, the six paths could lead to a distinct product assortment, aimed at different segments and industries, and transform functional products into emotional-experiential ones. The product portfolios might be unlike those in use today. It must be emphasized that traditional product development techniques still have their place in ensuring a successful outcome. They, in combination with picking the right targets, will offer a measure of value that should not be ignored.
Beyond the stage-gate process: Whether it is performed explicitly or implicitly, new product development rests on the foundation of the 5C's: customers, competitors, company skills, collaboration, and context. Product development teams consider these factors in choosing target audiences, protecting themselves against visible competitors, and capitalizing on their strengths. Including these factors makes the NPD process systematic and, as far as it goes, comprehensive. However, one focus missing from the process seems to be the goal of achieving uncrowded market space. Blue ocean strategy has shown its value in numerous industries. If the NPD process focused on satisfying customers in a way that avoided competitive challenges, the resulting products and services might have a higher probability of success. The success would be more persistent, allowing the first movers to satisfy customers, build brand equity, and buttress their gains against competitive attacks that might come later. Incorporating a blue ocean perspective into the new product development process would yield the possibility of more successful product introductions and greater long-term success. It could be depicted by an enhanced model, which could be termed the strategic opportunity product development model. The model, depicted in Figure 5, involves an explicit market analysis activity before the discovery stage of the existing stage-gate model. The analysis would be more than market assessment, which really focuses on the numbers or needs of consumers. It would explicitly analyze competitors and their products. It would plumb the true voice of the customer to generate new product and service offerings that might transcend current competitors. Since, in total, the six paths encompass the range of actions that could lead to a blue ocean solution, they provide a comprehensive framework to guide action. Each of the pathways allows product developers to choose the appropriate combination of the four actions to create a product relatively free from competitors. To illustrate, the combination of four actions and six pathways is labeled the strategic opportunity product development matrix. The extended model provides a measure of competitive avoidance. By navigating the competitive landscape using the foresight it provides, product developers can evaluate opportunities more strategically. Instead of aiming for the low-hanging fruit defined by their current strengths and level of resources, marketers can assess new product opportunities with an extended view that focuses on robust strategic advantages. They can avoid otherwise promising product ideas whose benefits are too easy to duplicate.
The strategic opportunity product development matrix/six paths - four actions: The idea of adding a blue ocean-related stage to the traditional product development model is easy enough to express. Mapping how it would operate takes some explication. Marketers may find the simple device of combining the six paths with the four actions into a matrix (see Figure 6) helpful in clarifying appropriate responses to particular competitive situations. One of the examples used above for Curves illustrates how to use the matrix. Curves' management approached the blue ocean decision in several stages. First, it found an underserved pocket of consumers, women, as it scanned the strategic groups within the industry. Second, a deeper investigation revealed the product and service elements that did not serve the target consumers. The third stage involved identifying exactly what elements needed to be eliminated, reduced, raised, or created. Finally, management needed to deliver the new product and market it. It is noteworthy that the matrix does not have "boxes" in the form of the lines typical of spreadsheets. When marketers scan for and find blue ocean opportunities in one of the paths, the resulting four actions may have connections to different paths. When Curves found women who were underserved, and eliminated, or more properly excluded, males from the product, its actions focused within the strategic group path. However, reducing the quantity and variety of machines, amenities and the length of the workout could be considered linked to the scope of products and services path. By doing so, it raised the value of the workout. Similarly, creating the female-empowered environment seems linked to a positive emotional connection. Thus, marketers have a tool that can help them craft a product, service or product/service bundle that avoids the consequences of competition.
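To show one way the matrix might be held in software, here is a minimal sketch that represents the six paths and four actions as a nested dictionary and fills in the Curves example from the text; the data structure itself is an assumption for illustration, not something proposed in the paper.
```python
# Minimal sketch: the six paths x four actions matrix as a data structure.
# The paths, actions and Curves entries follow the text; the representation
# is a hypothetical illustration.

PATHS = ["alternatives within industries", "strategic groups of customers",
         "buyers", "scope of products and services",
         "function-emotional orientation", "time"]
ACTIONS = ["eliminate", "reduce", "raise", "create"]

# Cells hold free-form notes; as the text stresses, the matrix has no rigid
# "boxes", so a finding in one path may produce actions linked to others.
matrix = {path: {action: [] for action in ACTIONS} for path in PATHS}

matrix["strategic groups of customers"]["eliminate"].append("male customers")
matrix["scope of products and services"]["reduce"].append(
    "machines, amenities, workout length")
matrix["scope of products and services"]["raise"].append(
    "value of the workout")
matrix["function-emotional orientation"]["create"].append(
    "female-empowered environment")

for path, cells in matrix.items():
    for action, notes in cells.items():
        if notes:
            print(f"{path} / {action}: {'; '.join(notes)}")
```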
Summary: The paper reviews the existing literature and proposes a single modification to the new product development process. While some companies, including those cited above, use the enhanced model, the marketing literature reports little use among firms that generate new products and services. For those firms that have not adopted the blue ocean perspective, adopting one is not impossible. As the examples in the literature show, adopting a blue ocean perspective offers the possibility of avoiding the competitive morass seen in numerous industries. Competing in uncluttered market space offers the great advantage of being able to execute plans without having to react defensively to the actions of others. Even if competitors' moves do not block a company's actions completely, they may create obstacles that sap resources and impede success. Implications for product developers
|
Questioning worth: selling out in the music industry
|
[
"Marketisation",
"Authenticity",
"Worth",
"Inspired worlds",
"Musicians",
"Selling out"
] |
Summarize the following paper into structured abstract.
Introduction: At a time when society craves authenticity, the notion of selling out presents significant challenges (Beverland, 2005b; Hede et al., 2014; Hietanen and Rokka, 2015). This is pertinent when consumers yearn for alternatives to the offerings created by capitalist social relations of production or institutional arrangements controlled by capitalism (Edwards, 1975; Marx, 1967). In such contexts, consumers may draw upon the inspired world of uniqueness, morality and independence to interpret the worth of an artist and reject influences of the market world that signify capitalist social relations (Boltanski and Thevenot, 2006). We examine such a situation, in which consumers stigmatise as "sell outs" artists who are marketed under the influence of capitalist social relations of production. We contribute to the debate on authenticity by attending to the question of worth, which is under-examined in existing literature, drawing upon French pragmatic sociology, with specific attention to convention theory, to understand conflicting interpretations of worth.
Theoretical background: Authenticity has been explored in a variety of contexts ranging from brands (Beverland, 2005b, 2006) and celebrities (Moulard et al., 2015; Preece, 2015) to music genres (Strand, 2014). Benjamin (1968) posits that the location of an original piece of art in a particular time, space and tradition contributes to its aura, which is necessary for authenticity. A reproduced artwork is never fully present without an original, and this leads to a desire for authentic experiences. Miller (1995) suggests that capitalist social relations create a rupture between production as an authentic ideal and mass consumption. As a result, consumers are forced to overcome this schism by seeking authenticity in the very commodities that create the sense of rupture in the first place. Fine (2003, p. 153) asserts that the desire for authenticity "occupies a central position in contemporary culture. Whether in our search for selfhood, leisure experience, or in our material purchases, we search for the real, the genuine". This quest for authenticity manifests as a particular desire for real, true and/or genuine experiences that consumers try to situate in everyday objects of mass consumption (Beverland and Farrelly, 2010).
Method: This study utilised an interpretive research design to explore both the internal and peer-to-peer debates fans go through when considering artists. Twenty-two in-depth interviews allowed the expression of non-conformity, as quite often in the music industry the type of music one likes or believes to be authentic is judged as a reflection of that person's character (Beverland and Farrelly, 2010). After initial analysis of the interviews it was evident that selling out often occurs in the context of peer-to-peer discussions. Two group discussions with 20 participants were therefore undertaken to unpack the interpretation, debate and group negotiations surrounding artists. Participants all identified as fans of a variety of rock artists, and a cross-section of perspectives was canvassed (see Tables I-III).
Findings: Three themes emerged from the data: "Authenticity and Worth in the Inspired World", "Selling Out as Loss of Worth" and "Signifiers of Selling Out". These themes help us understand the dimensions of market and anti-market forces, along with individual artists' morality and politics that lead to consumer perception of selling out.
Discussion: We investigate the notion of worth that shapes consumers' decision-making, considering authenticity and selling out in the context of contemporary music artistry. Exploring the clash between the inspired and the market worlds, and its impact on the world of fame, helps us understand the role of worth and how it is valued by consumers in an increasingly marketised space. We identify issues of change in artistic expression in music, image, behaviour and originality due to marketisation, which drives a shift from an anti-capitalist to a market ethos in artists' motives and integrity. This results in a loss of worth by the musical artist and, therefore, the perception by consumers that the artist has sold out. The key role that worth plays in this phenomenon, amid the tensions among the inspired and market worlds and the world of fame, is one that has not received explicit attention in prior marketing literature. French pragmatic sociology and convention theory also aid in understanding authenticity and worth.
Implications for marketing theory and practice: We extend the discussion beyond music discourse to the general marketing arena surrounding human brands. As professional athletes in the sports world must span both professional athletic (inspired world) and commercial domains (market world), accusations of selling out naturally follow. In applying the findings of this paper, we would expect sports fans to use similar cues when assessing an athlete's authenticity, and changes in behaviours may bring authenticity into question. Consumers or fans may question whether an athlete has sold out and consider the athlete's motives in such verdicts. One possible illustration of how this authentication model in music discourse has application is to consider the sporting sector of skateboarding (Hawk and Mortimer, 2002, 2008). Denunciations of selling out have been linked to skateboard professional Tony Hawk (CEO of Tony Hawk Inc.) and his multimillion-dollar endorsement deals and products. Fans questioned Tony Hawk's extension of his name and brand to the video game arena with Tony Hawk Pro Skater and the accompanying perceived loss of worth as he purposefully moved from the anti-establishment ethos found in the inspired world of skateboarding subculture to the capitalist ethos of the market world.
|
Social media monitoring: aims, methods, and challenges for international companies
|
[
"Social media",
"Monitoring",
"International companies"
] |
Summarize the following paper into structured abstract.
1. Introduction: This research paper aims at clarifying social media monitoring from the perspective of international companies. A systematic literature review was used to seek current insights on the methods used, and so illuminate not only the benefits but also the difficulties attached to the monitoring process.
2. Method: This exploratory paper is based on a systematic literature review of peer-reviewed journals. It aims at elucidating the use of social media monitoring or tracking and what benefits and difficulties this may entail for international companies. In the databases of EBSCOhost and ProQuest, after several tryouts, the keywords "social media" and [company or organization] and [monitoring or prognosis or metrics or tracking or analytics] were selected. The search spanned a 10-year period, 2003-2013, although the earlier years yielded no results, and was restricted to peer-reviewed papers. The search results were copied into RefWorks software for further analysis. After checking the title and abstract to see if they matched the keywords, and confining the search to papers in English, 38 articles remained. These articles were read thoroughly with the further selection criterion that they discussed monitoring or tracking of social media interaction by companies or other organizations. This resulted in a sample of 30 articles, published during the years 2008-2012 (these titles are marked * in the reference list) (Table I).
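As an illustration of the screening logic described above, the sketch below applies the same boolean query and selection criteria to a list of bibliographic records; the record structure and the sample record are hypothetical, not data from the study.
```python
# Minimal sketch of the boolean screening step (hypothetical records).

def matches_query(text):
    """Mirror of the search string: "social media" AND
    (company OR organization) AND
    (monitoring OR prognosis OR metrics OR tracking OR analytics)."""
    t = text.lower()
    return ("social media" in t
            and any(w in t for w in ("company", "organization"))
            and any(w in t for w in ("monitoring", "prognosis",
                                     "metrics", "tracking", "analytics")))

def screen(records):
    """Keep peer-reviewed, English records from 2003-2013 whose
    title or abstract matches the query."""
    return [r for r in records
            if r["peer_reviewed"] and r["language"] == "English"
            and 2003 <= r["year"] <= 2013
            and matches_query(r["title"] + " " + r["abstract"])]

# Hypothetical record:
sample = [{"title": "Social media monitoring in companies",
           "abstract": "Tracking brand mentions ...",
           "year": 2010, "language": "English", "peer_reviewed": True}]
print(len(screen(sample)))  # -> 1
```
The full-text criterion applied to the remaining 38 articles (that they actually discuss monitoring or tracking by companies or other organizations) is a human judgment and is not modeled here.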
3. Findings: The findings are presented and summarized in the sections below.
4. Conclusion and discussion: Social media have apparently allowed users and companies to experience the magic of interaction. Traditional advertisements on TV or in magazines are not able to incorporate user experiences, and their feedback performance is lower than that of social media. The unique characteristics of social media allow companies to capture information with immediacy (Billington and Billington, 2012). Therefore, the new integral services for social media monitoring need to provide output in real time (Hipperson, 2010), for example, sending alerts when words of interest are noted (Rappaport, 2010).
|
A risk analysis model for mining accidents using a fuzzy approach based on fault tree analysis
|
[
"Risk analysis",
"Fuzzy logic",
"Fault tree analysis",
"Chrome mining"
] |
Summarize the following paper into structured abstract.
1. Introduction: Evaluation of long-term accident statistics shows that the mining sector carries an above-average risk of accidents compared to other sectors (Azapagic, 2004). Because the mining sector includes many high-risk activities, and features such as environmental conditions with a significant presence of humidity, dust or falling rocks have an important influence on the number and severity of accidents or incidents (Sanmiquel et al., 2015), maintaining mining processes uninterruptedly and safely is an important issue. For an acceptable level of risk in mines, it is crucial to specify all accidents and incidents comprehensively and to observe the causes of undesired events carefully. In this way, priority fields can be identified and improvement plans drawn up to minimize the risk in mines. One systematic method for analyzing the causes of risks is fault tree analysis, a graphical, deductive method in which a specific risk is qualitatively traced from a top event back to its primary causes (Hyun et al., 2015).
2. Literature review: There is a crucial relationship between hazards, risks and accidents. Risk is the effect of uncertainty on objectives caused by variability and specific uncertain events, and it is often measured in terms of consequences and likelihood. Hazards are the prerequisites for risks, and when all hazards are safely controlled, there is no risk of unwanted events (Liu et al., 2015). Risk analysis concentrates on evaluating undesired events involving hazardous conditions by considering their occurrence probabilities and possible consequences. In this way, it becomes possible to manage them by establishing control measures and to prevent or mitigate risk situations. Because the resources for managing risk in an organization are limited, it is crucial to identify the underlying events in terms of their occurrence frequency and possible consequences.
3. Methodology: In underground mines, there are a considerable number of hazards, including specialized equipment, humidity, rock stresses, dust and harmful gases. These hazards have the potential to trigger accidents that can lead to injuries, multiple fatalities and/or major asset losses unless risk control measures are implemented that effectively manage them (Liu et al., 2016). A methodology is therefore developed within the scope of the study to perform a risk analysis that captures risky situations, which vary with mine type and with geological, managerial and technical infrastructure. Given the high risk involved in mining activities, an approach of this kind, able to reveal all possible accidents and incidents in a mine, is necessary to generate the right risk mitigation strategies.
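To make the fault tree calculus concrete, the sketch below propagates triangular fuzzy probabilities through AND/OR gates using the common component-wise approximation. The gate formulas are standard fault tree arithmetic for independent events; the basic events and their values are invented for illustration and are not taken from the case study.
```python
# Minimal sketch: triangular fuzzy probabilities (low, mode, high)
# propagated through fault tree AND/OR gates. Both gate functions are
# monotone increasing in each input, so component-wise arithmetic is
# the usual approximation. Event values below are hypothetical.

def fuzzy_and(*events):
    """AND gate: all basic events must occur; probabilities multiply."""
    low, mode, high = 1.0, 1.0, 1.0
    for l, m, h in events:
        low, mode, high = low * l, mode * m, high * h
    return (low, mode, high)

def fuzzy_or(*events):
    """OR gate: any event suffices; P = 1 - prod(1 - p)."""
    low, mode, high = 1.0, 1.0, 1.0
    for l, m, h in events:
        low, mode, high = low * (1 - l), mode * (1 - m), high * (1 - h)
    return (1 - low, 1 - mode, 1 - high)

# Hypothetical basic events for a loading/conveying accident:
brake_failure  = (0.01, 0.02, 0.05)
operator_error = (0.02, 0.04, 0.08)
rock_fall      = (0.005, 0.01, 0.03)

# Hypothetical top event: an accident occurs if rock falls, OR the
# machine fails AND the operator cannot recover.
top = fuzzy_or(rock_fall, fuzzy_and(brake_failure, operator_error))
print(tuple(round(x, 4) for x in top))
```
In a full fuzzy fault tree analysis the expert-elicited fuzzy estimates would then be defuzzified and the basic events ranked by their contribution to the top event, which is how the priority fields for improvement are identified.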
4. Application of risk analysis methodology for an underground chrome mine: The proposed methodology is implemented in an underground chrome mine located in central Turkey. Since mining firms hold only limited data on past accidents, to ensure the accuracy of the analysis only activities in the underground loading and conveying processes of the mine are considered in the risk analysis. Following the methodology, the four stages of the risk analysis are presented as follows.
5. Conclusion and recommendations: Important technological innovations, beginning in the seventeenth century, have pioneered significant developments for the sustainability of mining (Suppen et al., 2006). The stages in the use of high technology for mining activities worldwide are mechanization, remote steering systems, automation and robotization (Kizil et al., 1995). Some studies also consider the mining sector in relation to advanced automation technologies (Boudreau-Trudel et al., 2014, 2015; Bellamy and Pravica, 2011). However, in many countries the operations of the mining sector are maintained by small-sized enterprises and by labor-intensive activities. In all mines, there are many hazardous conditions that need to be analyzed so that they can be prevented or mitigated. No matter how severe and probable they are, preventing underground mining accidents and incidents is one of the most important objectives of mine administrators. It is essential to address issues such as necessary audits, accident-prevention systems and appropriate technology for the avoidance of occupational accidents and physical injuries in the mining sector (Paul and Maiti, 2007; Maiti et al., 2004). To reduce and prevent the occurrence of accidents and incidents, the reasons for these undesired events have to be fully understood and mastered to provide a reference for further corrective measures (Jiang et al., 2012).
|
Foresight and futures in Europe: an overview
|
[
"Forward planning",
"Europe"
] |
Summarize the following paper into structured abstract.
Introduction: Europe has a long history of futures work or foresight, as Masini (1978), outlining the early development of the field, shows. From the foundation of "Prospective Studies" by Berger in the 1950s; through the work of de Jouvenel and the foundation of the journal Futures in the 1960s; the Club of Rome, the Swedish Secretariat for Futures Studies and others in the 1970s; Europe provided an important contribution to the expansion of futures work during that period. Paralleling this work in the west, countries in the eastern part of the then-divided continent were also involved in a variety of futures work, as Novaky et al. (2001) show. During the 1980s there appears to have been a reduction in activity in a number of European countries, but from the 1990s on a major resurgence has taken place. This paper, based on a survey undertaken for the Foundation for the Future, the State of Play of the Futures Field (SoPiFF), in mid-2007, with minor updates in April 2009, attempts to provide an indication of the scale of futures/foresight work in Europe. Details of the SoPiFF project, including its purpose and methodology, are given in Slaughter (2009) in this volume; for the SoPiFF database, on which this paper is based, see SoPiFF (2008). At the outset it should be made clear that the scale of this survey was limited by the resources available, and consequently it relied almost exclusively on web-based material readily available to the individual researcher. Although some translation was possible, the diversity of languages in Europe was not represented in the material surveyed, the majority of which was in English. Practically all European Union work and many national foresight studies throughout Europe are available in English as well as in the original language, but a larger multi-language survey would undoubtedly reveal much more work and provide a much more balanced view of the field.
Foresight in the European Union: The European Union is a unique institution bringing together, at the time of writing, 27 member states. The EU has been a significant force in the development of Foresight at the European scale, but it has also encouraged parallel developments at national, regional and local scales through, for example, the funding of projects that bring organizations in several member states together. As Colson and Corm (2006) note, the EU has also encouraged new member states, many in the former eastern bloc, to develop Foresight capability as part of the accession process. Some idea of the scale of foresight activity in Europe can be gained from the European Foresight Monitoring Network[1] that "monitors ongoing and emerging foresight activities" within the EU and elsewhere. At the time of writing (April 2009) the EFMN website lists 1,916 foresight initiatives, 160 briefs and 124 other documents. It is part of a series of initiatives, including Forsociety[2] and Forlearn[3], that aim to provide a knowledge-sharing platform for foresight practitioners and policy makers in the EU. The EFMN website provides a searchable database of foresight activity not only in Europe but worldwide. The initiatives can be searched by basic details, such as title and country; a series of drop-down menus under research area, industry, market, audience, output and sponsor; and a further drop-down menu that includes 31 different foresight methods. Foresight Volume 10, Number 6, 2008 contains a number of papers based on the information contained in EFMN (Butter et al., 2008). The network was initially funded by the EU's Directorate General for Research for a four-year period, 2005-2008, and extended for a further three years in late 2008. It was set up, as many EU projects are, in response to a call for bids from consortia of organizations in different member states to undertake the work. This process has led to the production of several reports and the development of foresight capability in a number of organizations across the EU, many of which have taken part in a number of such projects, but it does not guarantee continuation of any project beyond the initial funding nor the continued involvement of the organizations in Futures work once the project is completed. Most of these organizations, which include university departments and research institutes, both non-profit and commercial, are not exclusively devoted to foresight but have developed the capability alongside their other work. Foresight has also become established within the institutional structure of the EU. The European Parliamentary Technology Assessment (EPTA)[4], a network of parliamentary and other organizations, includes foresight among its activities examining the impact of new technologies. Among the eight joint research institutes supported by the European Commission, the Institute for Prospective Technological Studies based in Seville plays a major role in foresight in the EU through the Foresight for the European Research Area (FORERA) team[5]. IPTS was involved in the development of, and hosts, the online guide to foresight that is part of Forlearn[6], a continually developing resource for those wishing to undertake a foresight exercise. The guide outlines the reasons for doing foresight and the issues involved in setting up, running (including guidance on methods), and following up a project. IPTS has also hosted three international seminars on future-oriented technology analysis[7].
For a time there was also a special science and technology foresight unit within the Directorate General for Research[8], but this no longer exists (Butter et al., 2008) and its website has not been updated since August 2007. Although there is futures work in other areas, the term foresight in the EU has most often been related to science and technology, which both the commission and member governments have regarded as crucial to future economic development and prosperity in the global marketplace. This was reflected in the survey on which this paper is based, mainly because there is a large amount of such work within Europe and because extending the scope beyond foresight in this sense would have necessitated greater resources and encountered definitional difficulties in deciding where the boundaries of the survey should be drawn. Despite these limitations, some examples of futures work in other areas were included, and the EFMN database, for example, does include references to environmental and sustainable development themes. It is also true that the methodological work of IPTS and others, which has been funded by the EU, is applicable to areas other than science and technology. One funded project that focused specifically on methodology and ran for three years from 2004 to 2007 was COST Action 22: Advancing Foresight Methodologies. It brought together a group that included individuals with backgrounds in futures and in sustainability and the environment[9]. One policy area where the term has been used, alongside more traditional forecasting, is transport. The Foresight for Transport (2004) study, for example, used a number of futures methods and offered three contrasting scenarios for the future of transport in Europe:
1. A reference scenario (most likely) that is based on the continuation of past trends.
2. An ideal scenario (most desirable in the opinion of the group producing the report) that "would require a paradigm shift in values ... towards environmental sustainability where ecological concerns and objectives determine economic goals and strategies."
3. A negative future state scenario (most feared) in which governments are unable to deal with negative environmental developments.
Other European agencies are also involved in looking to the future. A report from the European Environmental Agency (2007), for example, notes that a wide range of forward-looking studies relevant to the pan-European scale have been produced but that few relate to the environment. Lists in the appendix, divided by the geographical area covered into global; western and central Europe; south eastern Europe; and eastern Europe, Caucasus and central Asia, provide details of a large number of studies with relevance to Europe. However, "the review of forward-looking assessments within the region also highlights a number of gaps, including weak coverage of environmental concerns, recurring problems of methodological soundness, reliability, information gaps, and a lack of direct relevance to priority policy issues." This suggests that futures work in this area had not advanced significantly since the earlier study for the EEA, which attempted to develop "an inventory of scenario studies relevant in the context of sustainable development for Europe" (European Environmental Agency, 2000).
International organizations based in Europe: Europe also hosts a number of international organizations involved in futures work, including the OECD Futures Programme[10], the International Institute for Applied Systems Analysis (IIASA)[11], and the UNIDO Technology Foresight Programme[12]. Each of these organizations has its own focus of futures work: the OECD concentrates on issues of concern to member governments, including studies on migration, infrastructure to 2030, and risk management; IIASA on three core themes of environment and natural resources, population and society, and energy and technology; and UNIDO on technology foresight in a program intended to assist the transition of former eastern bloc states to a market economy. One of the resources available is a technology foresight manual that includes guidance on the use of foresight methods (UNIDO, 2005).
Foresight at the national scale: The close relationship between Foresight in the EU and within its member states is emphasised by the ForSociety ERA-Net[2], which "is a sustainable and dynamic network, where national foresight program managers co-ordinate their activities and - on the basis of shared knowledge on relevant issues, methodologies, legal and financial frameworks - regularly develop and implement efficient trans-national foresight programs that significantly enrich both the national and the European research and innovation systems." Although the objectives of the network include the development of "a European foresight culture, which strengthens future oriented thinking among policy makers and enhances the participation of all relevant stakeholder groups in the decision-making processes," the focus is clear, the partners in the network being public organizations responsible for research and technological policy making. Most European countries have undertaken national foresight exercises in some form. In a paper concerned with the evaluation of national foresight activities in Europe, Georghiou and Keenan (2006) emphasize the focus on science and technology in listing some of the most commonly stated goals for such exercises:
* exploring future opportunities so as to set priorities for investment in science and innovation activities;
* re-orientating the science and innovation system;
* demonstrating the vitality of the science and innovation system;
* bringing new actors into the strategic debate (in science and innovation policy); and
* building new networks and linkages across fields, sectors and markets around problems.
This leads them to quote a definition of foresight that has emerged from international collaboration in the field, which again focuses on science and technology. The definition does, however, also emphasize the participatory nature of the process that has developed: The foresight process involves intense iterative periods of open reflection, networking, consultation and discussion, leading to the joint refining of future visions and the common ownership of strategies, with the aim of exploiting long term opportunities opened up through the impact of science, technology and innovation on society ... It is the discovery of a common space for open thinking on the future and the incubation of strategic approaches (Cassingena Harper, cited in Georghiou and Keenan, 2006). National foresight programs in Europe, from the initial German study onwards, have most often been inspired by the belief that the economic performance of the nation in question has not kept pace with that of other countries. Most notably, in the 1980s and 1990s Japan was seen as performing much better economically than many European countries, and the view developed, supported by research by Irvine and Martin (1984), that the Japanese Delphi exercises, which brought government, industry and the research community together in a foresight program, played an important part in this. So significant was this that the first German exercise even used the Delphi questions from the latest Japanese exercise, and the first UK Program sought to bridge the perceived gap between the science base and industry through a government-led initiative.
In consequence the programs tended to be established within departments responsible for industry, science and technology, or economic policy, and these departments came to be seen as responsible for foresight. The experience of the second cycle of the UK foresight program provides an interesting illustration. The first program, which ran from 1994 to 1999, was firmly within science and technology, which was seen as providing the most promising route to solving the problem of low growth in the British economy. The program was based around a series of expert panels, drawn from business, the research community and government, focused on a range of economic sectors. Perhaps, in part, because of the change of government in 1997, the scope of the second program was extended to include "quality of life" issues. Although the panels were again mainly focused on economic sectors, environmental issues were included, and two of the three cross-cutting thematic panels were concerned with the ageing population and crime prevention. Alongside these were a series of task forces and an ambitious information resource called the knowledge pool. Not long after the reports of the panels were completed, a review of the foresight process was instigated at the request of the Science Minister, in whose department Foresight was located. It concluded that: The current objectives were considered to be too broad: foresight ... should focus on science and technology, to identify new or disruptive technologies that are likely to have major impacts (Miles, 2003). Whether this was the result of political dissatisfaction with the recommendations of some panels or concern about departmental boundaries, the second cycle was replaced by a scaled-down, clearly science-focused, project-based third cycle. This is not to imply that futures work at the level of national governments in Europe is limited to science and technology. In the UK, Horizon Scanning has become a significant focus of Futures activity in government, with several departments, including DEFRA, the Department for Environment, Food and Rural Affairs[13], and the Health and Safety Executive[14], being involved. The Office for Science, based within the Department for Innovation, Universities and Skills, runs both the Foresight Program and the Horizon Scanning Centre[15]. The Centre has commissioned two major scans, Sigma and Delta, that can be accessed from its website, and facilitates the Futures Analysts' Network (FAN) Club: The Sigma Scan[16] is a quality-assured synthesis of some of the world's best Horizon Scanning sources. It covers future issues and trends across the full public policy agenda.
It is drawn from a range of sources including think tanks, academic publications, mainstream media, corporate foresight, expert/strategic thinkers, government sources, alternative journals, charities/NGOs, blogs, minority communities, and futurists. The Delta Scan[17] is an overview of future science and technology issues and trends, with contributions by over 200 science and technology experts from the worlds of government, business, academia and communication in the UK and US. The FAN Club is "a forum where those who have an interest in horizon scanning and futures analysis can meet to exchange new ideas, innovative thinking and good practice"[18]. Some idea of the extent of UK Government futures activity in late 2008 can be gained from a two-page summary by the consultancy Outsights, UK Government Futures (Outsights, 2008), outlining the work that the firm has carried out and listing twelve departments that now use futures thinking. The government also funds organizations such as the Sustainable Development Commission[19] and the Royal Commission on Environmental Pollution[20], both of which undertake studies relevant to the future. In February 2008 the Strategy Unit of the Prime Minister's Cabinet Office published a discussion paper, "Realising Britain's potential: future strategic challenges for Britain"[21], the aim of which was "to establish an evidence base on which policy makers can build future strategy." By mapping key trends and drivers of change, identifying public concerns and expectations, scanning, and analysing Britain's current strengths and challenges, the study identified nine key challenges, including globalization and climate change. The document may not question the continuation of the current paradigm, but it does, at least, attempt to encourage a forward look. Such forward-looking reports have been produced by the Wetenschappelijke Raad voor het Regeringsbeleid (WRR), the Netherlands Scientific Council for Government Policy, on a regular basis since its establishment in 1972[22]. "The aim of the WRR is to advise the government about future developments ... On the basis of scientific knowledge, all kinds of preconceived assumptions are subject to discussion, possible alternative policies are analyzed, and solutions with an eye to future developments are presented." The focus of the council's activities is issues of long-term future importance to government. A book published at the time of the 25th anniversary of the WRR (Scientific Council for Government Policy (WRR), 1997) classified the reports the council had prepared under the following headings: futures studies; economy and technology; demography; employment; social security; income distribution; education and culture; health and welfare; environment and spatial planning; and international relations; indicating the wide range of topics it had considered. At the political level, however, one country, Finland, was until 2006 unique in Europe. The Finnish Parliament Committee for the Future was formed in 1993 and made permanent in 2000. It has 17 members from different parties and is one of the Parliament of Finland's 15 standing committees. It has a wide-ranging remit covering issues and research related to the future, including energy policy, regional policy, GM crops, the impact of ICT on older people, and climate and energy[23].
In 2006 the Scottish Parliament, part of the devolved system of government in the UK, created the Scottish Futures Forum with the aim of widening participation, challenging policy and increasing the ability of members of the parliament and the wider Scottish community to consider future challenges and opportunities. The forum brings together parliamentarians, academics, civil servants and business leaders. In 2007 the UK House of Commons Public Administration Select Committee suggested that the UK Parliament might also form a futures forum. In response to this suggestion, a Parliamentary Office of Science and Technology Postnote on Futures and Foresight was published in May 2009 (Parliamentary Office of Science and Technology, 2009).
Regional foresight: The term region is used in two distinct and potentially confusing ways, both of which have been used in the European context. It refers both to a group of countries, such as those surrounding the Baltic Sea, and, at a smaller scale, to subdivisions of one country. One example of regional foresight in Europe incorporated both of these definitions, being concerned with an area in the western Baltic that brought together parts of Denmark, Sweden and Germany to examine common conditions, options and challenges for the future[24]. At the sub-state regional level the EU has funded a number of foresight projects, again mainly focused on economic development, including the FOREN project and FUTURREG[25]. The FOREN Project involved 26 partners led by a team from Spain, France, Italy, and the UK, which produced a Practical Guide to Regional Foresight that was made available in several of the languages of the EU. The published document provided guidance on how Foresight can be used in regions and included a number of examples of its use in Spain, Finland, France and the UK, a bibliography and a list of useful websites. The UK Guide was edited by Miles and Keenan (2002) at Manchester University, one of the partners in the project. The German version, Praktischer Leitfaden fur die regionale Vorausschau in Deutschland, also included an extensive list of over 40 foresight studies at the regional and local level in the Federal Republic (Zweck, 2002). The FUTURREG Project again brought together partners across the EU: Cardiff University, UK; URENIO, the Urban and Regional Innovation Research Unit, Greece; The Destree Institute, Belgium; ADER, the Economic Development Agency of La Rioja, Spain; The Malta Council for Science and Technology; The Finland Futures Research Centre; and The Border, Midland and Western Regional Assembly in Ireland. FUTURREG is designed to have significant long-term impacts for regional development policies, especially by ensuring that policies - and regional development organizations - are informed by high-quality futures tools and participatory processes, including environmental scanning, trend analysis, scenarios, visionary management and Delphi. The project's aims are to:
* develop a common futures toolkit that can be used in all EU regions (available on the website);
* increase the use of futures tools; and
* apply the futures toolkit to regional development issues.
There are also many examples of foresight in the individual regions of the EU, as can be seen in the references of the German edition of the Practical Guide noted above, in Kaivo-oja et al. (2002), which provides details of studies in Finland, and in entries on the EFMN website. In 2007 Yorkshire Futures, part of the Regional Development Agency for Yorkshire and the Humber, commissioned Henley Centre Headlight Vision to prepare The Future of Yorkshire and Humber: Trends and Scenarios to 2030 (Henley Centre Headlight Vision for and with Yorkshire Futures, 2008). Although prepared for a regional development agency with the economy as its central concern, this study takes a broad view of 51 drivers of change that were integrated into six major dimensions: community make up; climate change and energy; economy and skills; transport and housing; health and well-being; and society and inequalities. These are then used to develop four scenarios for the future of the region:
1. The most plausible, which represents the continuation of existing trends and was requested as part of the brief.
2. Northern lights, based on the idea that northern England benefits from increasing dissatisfaction in the overcrowded and congested London area.
3. Low carbon locale, based on peak oil and legally enforced carbon emission reductions.
4. Fragile seams, in which inequality becomes acute and dominates social and economic policy.
In commissioning the study, Yorkshire Futures intended that the scenarios should be used to facilitate discussion within the region, and the report contains guidance for "policy makers, partners, stakeholders and other interested parties" on using the scenarios. Yorkshire Futures also convenes an occasional series of discussions on futures issues relevant to the region.
Companies: For commercial reasons, companies that undertake foresight often keep their work confidential, but there are several European firms that are known to be involved. Best known for its scenario work, which began in the 1970s, is Shell[26]. In introducing the most recent Scenarios for 2050, the chief executive said: By 2100, the world's energy system will be radically different from today's. Renewable energy like solar, wind, hydroelectricity, and bio-fuels will make up a large share of the energy mix, and nuclear energy, too, will have a place. He is optimistic that "humans will have found ways of dealing with air pollution and greenhouse gas emissions. New technologies will have reduced the amount of energy needed to power buildings and vehicles," but the two contrasting scenarios, Scramble and Blueprints, suggest that the future may not be easy: Scramble. Like an off-road rally through a mountainous desert, it promises excitement and fierce competition. However, the unintended consequence of "more haste" will often be "less speed," and many will crash along the way. The alternative scenario can be called Blueprints, which resembles a cautious ride, with some false starts, on a road that is still under construction. Whether we arrive safely at our destination depends on the discipline of the drivers and the ingenuity of all those involved in the construction effort. Technological innovation provides the excitement[27]. The purpose of company Foresight is, not surprisingly, focused on the future success of the company concerned. The Siemens Strategy and Vision program makes this clear, its purpose being "to identify promising technologies and future consumer wishes and to discover new business possibilities"[28]. The studies involved can, however, be quite wide-ranging, as the examples of Philips and BT indicate: Philips Design Probes is a dedicated "far-future" research initiative to track trends and developments that may ultimately evolve into mainstream issues that have a significant impact on business. The Probes generate insights from research in five main areas: politics, economics, culture, environments and technology futures. With the aim of understanding "lifestyle" post 2020, the program aims to identify probable systemic shifts in the social and economic domains likely to affect our business and create intellectual property in new areas. It challenges conventional ways of thinking to come up with concepts to stimulate debate. Deliverables range from scenarios and narratives to the creation of experience prototypes and IP fortressing[29]. The (BT) timeline encompasses all areas of life influenced by technology developments including artificial intelligence, health and medical, business and education, demographics, energy, robotics, space, telecommunications and transport and travel[30]. A recent commercial development based in the UK but providing a service to businesses and the futures community worldwide is Shaping Tomorrow[31]. This web-based service provides subscribing members with a wide range of futures-relevant information and networking capabilities. Further information on Corporate Foresight in Europe can be found in reviews by Becker (2003) and Daheim and Uerz (2006).
Consultancies, research institutes and think tanks: The growing interest in foresight has encouraged the development of an increasing number of institutes, think tanks, and consultancies, some specializing almost totally in futures work and others having foresight as part of their portfolio. Some, such as the Copenhagen Institute for Futures Studies[32] and Futuribles[33], have been operating in the futures field since the 1970s, while others have been formed to serve the increasing demand for foresight in the last 15 to 20 years. France, Germany, the Scandinavian countries and the UK appear to have been particularly fertile ground for both firms and individual consultants in the field, although as most tend to work mainly within their host country and in the national language, this survey has probably underestimated the numbers in most countries. Within the limitations of this research it was, for example, possible to find websites for at least 20 consultancies in the UK, several operating as individuals, but also including organizations such as the think tank DEMOS[34], where futures forms part of a wide range of research, and the Forum for the Future[35], which focuses on sustainable development. Also worthy of note are the New Economics Foundation[36], which "aims to improve quality of life by promoting innovative solutions that challenge mainstream thinking on economic, environment and social issues," and the Tomorrow Project[37], "an independent, registered charity undertaking a program of research, consultation and communication in Britain about people's lives - what's been happening so far, what will shape the years ahead, and what we need to think about."
Universities: University departments in many European countries have developed foresight capability. Most of this has been in research within existing departments or institutes, often funded by European or national government programs. Among the relatively few that have established a separate identity is the Department of Strategic Foresight within the economics and management science school at the Conservatoire National des Arts et Metiers (CNAM) in Paris. Under the direction of Professor Michel Godet, the department has become a major centre for strategic and regional foresight research and the home of the Laboratory for Investigation in Prospective, Strategy and Organization (LIPSOR)[38]. More recent developments include the Futures Academy at the Dublin Institute of Technology in Ireland and the James Martin Twenty-First Century School at the University of Oxford[39]. Focusing on the idea that the twenty-first century will be an unusually challenging one, the aim of the school is to stimulate research "by giving the university's scholars the resources and space to think imaginatively about the problems and the opportunities that the future will bring." Although some departments offer some teaching in futures, education and training in foresight within the university sector remains undeveloped, with Finland again providing the exception. The Finland Futures Academy (FFA) is a network of 17 Finnish universities which offers basic academic training and co-ordinates the national postgraduate program in futures studies. The leading institution in this program is the Turku School of Economics and Business Administration, where the national Master's program and graduate school in futures studies are based[40].
Conclusions: Foresight and futures work in Europe has grown significantly in the last 20 years. Quite why this should have been so is beyond the scope of this paper, but it may in part have been a reaction to the short-termism that dominated the UK in particular during the 1980s; the growing awareness of longer term environmental concerns such as global warming; and, again in the UK at least, concern that the economy was performing poorly in comparison with some other countries, together with the belief that science and technology offered a potential solution. Although there are examples that challenge the dominant paradigm, the fact that the term "foresight" has been used mainly in the context of science and technology and the continuation of the current economic paradigm suggests that this has been a major factor in its growth. Parallel futures work in sustainable development, climate change and other environmental areas, for example, has generally not come under the foresight heading, and as that heading was used as the main focus of the work on which this paper is based, such work is undoubtedly underrepresented here. Further research would help to clarify how far the futures approach has penetrated into these areas, which are clearly of concern to the future but which appear to have developed in parallel with the recent growth of foresight rather than being integrated with it. A large amount of government-based or government-funded work takes place within the European Union and in individual member states. Although there are a number of small units within government, such as the former Foresight Unit within the EU Directorate General for Research, the foresight work within the Institute for Prospective Technological Studies and, for example, that in the DIUS in the UK, most of this work has taken the form of funded projects, run over a number of years by outside contractors, leading to the publication of reports, guides and toolkits. These have been undertaken by a growing number of university departments, research institutes and consultancies that have in the process developed capability in foresight and produced several guides to the use of futures methods and techniques. Although in terms of futures and foresight these are important developments, they are small in relation to the activity of both the EU and its member states, and it is difficult to assess how much influence they have had on the day-to-day work of government. It seems likely that foresight is seen mainly as an additional activity, and although it can be argued that most, if not all, policy is concerned with the future, assessing the extent to which futures perspectives have become part of mainstream policymaking would require further research. Despite this extensive practical experience, there has been little development in the academic sector of theoretical understanding or of education and training in futures. Apart from some reviews of national foresight studies, there is little real appreciation of what foresight can and cannot be expected to do, and little critical evaluation of its results, with the danger that unrealistic expectations of its capabilities will be disappointed and its value doubted. The growth of foresight and related futures work in Europe has occurred during a period of economic prosperity and growing public expenditure, but it may not be sustainable if unrealistic expectations fail to be fulfilled, public spending in the recession is cut back, and foresight comes to be seen as an expendable luxury.
The last 15 years would then prove to be an exception rather than the beginning of the embedding of foresight into society that some, including the author of this paper, would wish it to be.
|
A micro intellectual capital knowledge flow model: a critical account of IC inside the classroom
|
[
"Bottom‐up",
"Collaboration",
"Intellectual capital",
"Knowledge flows",
"Micro intellectual capital",
"Social learning theory",
"Team‐based learning",
"Learning"
] |
Summarize the following paper into structured abstract.
1. Introduction: Universities are seen as a major contributor to the intellectual capital (IC) of both their region and their nation (Sanchez and Elena, 2006). Broadly, IC is the collection of intangibles which "allows an organisation to transfer a collection of material, financial and human resources into a system capable of creating value for the stakeholders" (European Commission, 2006, p. 4). This definition recognises that IC is found in all organisations. Previous studies have advocated that universities include their IC in their accounting information systems (e.g. Corcoles et al., 2011) and have focused on the university as a whole for operational management (Secundo et al., 2010) and external reporting of IC (Sanchez et al., 2009), as occurs in Austria. This is a top-down approach. In other words, university central administration defines and controls IC in terms of macro-accomplishments such as ranking or brand status. Understandably, this does not consider the value-creating activity of teaching, which is a core function in most universities. The objective of this paper is to give a bottom-up account by beginning with the needs, expectations and experience of students in a particular unit of study and de-emphasising aggregate outcomes. A micro-IC framework is proposed and used which can be applied in other institutions of higher learning. The connection between business and teaching is found in the observation from Ichijo and Nonaka (2007, p. 3) that "The success of a company in the twenty-first century will be determined by the extent to which its leaders can develop intellectual capital through knowledge creation and knowledge sharing on a global basis". The micro-IC model proposed is the result of applying the "critical" approach to management research (Alvesson and Deetz, 2000) to IC (Dumay, 2009). While this paper presents the basis and an example of the framework, it will also undertake the three tasks of the critical approach to research (insight, critique and transformative redefinition) which produced it and which has been successfully applied in this area (Oliver and Coyte, 2010). The first task (insight), that of appreciating the impact of IC practices on the people and the controlling organisation, is accomplished by providing a view from within the organisation. Universities as institutions of higher education are characterised by a variety of teaching activities which differentially affect the development of IC in students. The second task (critique) examines goals, ideas, ideologies and discourses: the structures that affect students. The goal is one of dominance. The teaching practices described in this paper are an attempt to promote IC rather than control or reinforce traditional teaching approaches. The third task (transformative redefinition) is concerned with enabling change and providing skills for new approaches. The paper concludes with suggestions for micro-IC research as well as the need for classroom methods to evolve to provide IC with the characteristics defined above that business students will come to value after they have completed their studies and have to exercise business judgement. An immediate issue that confronts any micro-IC consideration of IC in the classroom is the demand from professional groups and business organisations to recruit "job-ready" candidates.
Universities have in the past resisted some of these pressures on the grounds that the knowledge students take with them has a half-life, and that the task of the university is therefore to provide exposure and practice in critical thinking (Readings, 1996) and moral reasoning (Kohlberg, 1981), and to equip students to understand the changing nature of society and business (Bok, 2003). This is a long-standing view (e.g. Dewey, 1934; Ramsden, 2003). Consequently it is difficult to claim that the IC possessed by students is solely attributable to the knowledge gained from completion of the units of study that comprise an award. However, a micro-IC view suggests that it is the particular processes conducted in the classroom which develop IC. In particular, the processes associated with a critical approach foster micro-IC: new intellectual experiences, more complex social interactions which the student cannot resolve in the ways to which they are accustomed, and the removal of a single correct answer. In this respect, IC for students differs from IC for business. The commercial consequences of the judgements and managerial decisions that are apparent in business may not necessarily be noticed by the students themselves (Mason, 2002). Moreover, other IC factors may be dominant, for example organisational change (Pawlowsky, 2001). In the well-managed classroom, students acquire IC, and this may also include fostering a critical approach to management and management research. Despite universities encouraging students to reflect on their learning with a view to providing them with life-long capabilities of understanding, there are difficulties in applying this introspection to IC. It may be that the relentless week-by-week learning demands mask the IC which studying the topics generates. Or it may be that students find the analytical and synthesis processes of extracting and valuing their knowledge far from obvious. This is developed in Section 2, where the course structure is examined. The nature and value of reflection is well documented (e.g. Boud et al., 1985), but the evidence suggests it develops gradually (Schon, 1983) and depends on whether students know the topic well, something that does not apply in the modern, modularised university curriculum. Therefore, it is not possible to rely on reflection and similar methods to assess IC. In this respect some of the traditional approaches for detecting and assessing IC, such as the promise of innovation (Sullivan, 1998), performance culture (Bollen et al., 2005) and value (Kannan and Aulbur, 2004), are unlikely to provide sufficiently granular information. Faculties in universities worldwide have been unable to resist pressures to develop in students business-related generic graduate attributes that apply across a variety of jobs and life contexts, and to ensure content relevance. In America, the debate has been lost (Kirp, 2003). In Europe, with EU research funding, there has been a considerable emphasis on practical outcomes (Banks et al., 1997), while in developing countries the role of the university in stimulating economic growth from research is established (Kirkland, 2008). In Australia, the focus of the present study, the ALTC (2010) has been instrumental in relating generic skills to business discipline studies, directly involving senior university academics and rolling out the change discipline by discipline.
Academics in charge of units have therefore been confronted with a dual challenge: providing sufficient discipline-specific knowledge and blending it with generic skills. Since generic skills encourage learners to be more reflective and self-directed (Hager et al., 2002), their use has influenced classroom activities and assessment tasks. This is partly a function of the generic skills themselves, such as teamwork and co-operation, which cannot be fostered in a traditional lecture with assignments at home and closed-book, individually taken exams. This, then, is the background against which a knowledge flow approach to IC was adopted.
2. Theory development: a model for the flows of knowledge or information sharing: The development of theory for micro-IC in learning and teaching has three aspects drawing on the "insights" (Alvesson and Deetz, 2000, p. 17) revealed by a critical approach to management research. They are the categories of IC, the view of knowledge, and the use of flows instead of stocks. This section then concludes with a model for building IC through the flows of knowledge or information sharing. This is theory development in the middle range, consistent with Merton (1949/1968), in being selective about concept couples (knowledge flow and knowledge stock) to describe social processes[1]. IC itself provides three basic and closely interrelated categories for recognising the kinds of IC being built: the human capital of knowledge, competences, experience and know-how; structural capital, comprising organisational capital and technological capital; and relational capital, associated with customers and partners (Roos et al., 1997; Sveiby, 1997). In considering the categorical view of IC, there is a need to reposition it in relation to student knowledge and learning. The starting point is the taxonomies of stocks of IC discussed by Petty and Guthrie (2000), which have been agreed by consensus (e.g. Brooking, 1996; Edvinsson and Malone, 1997; Stewart, 1997). It is extended by identifying their place in university teaching and learning. In Table I, the scope column thus represents the traditional IC, and the application to learning is a reinterpretation of that scope at the student level of analysis. It can be seen from Table I that the mapping of business IC to student learning IC does not reduce the workability of the framework. There is one difference between a business and a university orientation, and it concerns structural capital: the processes do not reside in the student. However, the impact of the processes on the student is so formal and inescapable that it seems reasonable to allow that some of the processes are institutionalised in the student (e.g. the obligation for enrolment, submitting assessment and being graded). The second aspect is the two different views of knowledge as IC, depending upon whether the concept of knowledge is taken as a stock or as a flow (Bontis, 1999). There is a body of knowledge studies (e.g. Augier et al., 2001; Dixon, 2000; King and Ko, 2001; Markus, 2001; O'Leary, 2001; Schultze and Boland, 2000; Swap et al., 2001), subsequently used in Table I, which theorise knowledge flows in their own right. By taking the flow approach (Cricelli and Grimaldi, 2008) to knowledge, it is possible to specify the flows that have occurred and gain a more immediate sense of whether the knowledge flow has contributed to IC. With regard to the classroom, as with business, it is often easier to make a judgement about whether a knowledge flow has occurred than to specify whether an improvement in knowledge stock has been realised. Finally, there is the connection with learning and teaching. The theoretical foundation is Bandura's (1977) social learning theory (SLT), because it suggests that people learn by observing both the attitudes and behaviour of others. SLT supports the IC approach because it suggests that learning from others is a de facto knowledge flow resulting in a knowledge stock.
The observation itself may occur without being given particular attention, and micro-IC may be created without there being an immediate end-use for the knowledge gained. The final aspect concerns using flows rather than stocks. The discussion of stocks and flows is somewhat simple in the literature (e.g. Bassi and van Buren, 2000; Dierickx and Cool, 1989; Johnson, 1999), with even recent discussion adding little (Nonaka et al., 2008): drawing on Argyris (1992/1999), knowledge flows represent organisational learning, which can be equated with IC using the argument that the flow comes to rest as a stock. The view adopted in this paper follows the wealth creation argument from Stewart (1997), resulting in a higher-valued asset. In this case the higher-valued asset is the student, who becomes an employee with better potential to perform when they commence their new job (or to better perform their existing job). The existing theories on knowledge flows (identified in Table II) are not applicable to micro-IC, as they lose scalability when an attempt is made to apply them to individual knowledge flows in a non-organisational context without organisational performance. Even a multi-dimensional model that combines theories (Nissen and Levitt, 2002), bringing an engineering perspective to a foundation in the SECI model of knowledge creation (Nonaka and Takeuchi, 1995), is unable to do so, as it relies on business processes and resorts to a computational model. The conclusions drawn in the third column suggest a different approach, and the findings of the current study are appended in the final column of Table II. SLT offers a lens to "critique" (Alvesson and Deetz, 2000, p. 18) the building of IC in two respects. First, it is well known in teaching that students pay closer attention to the teacher and the classroom than might otherwise be expected, so observation can be a source of IC. By extension, SLT can be used to give students a learning and teaching experience that supports the desired business-related graduate outcomes for improved workplace competences by bringing together cognitive, behavioural and environmental (or external) influences. This is not to claim that SLT controls student learning or even student interaction. Understanding SLT gives the instructor an opportunity to shape a process that must rest with each and every student to build their own IC. Second, SLT builds IC by students undertaking self-reflection on the efficacy of their study preparation and knowledge. This is consistent with the most advanced model of student learning (the 3P model, originally proposed by Biggs, 1979). In introducing self-efficacy (i.e. the capability to successfully execute behaviour), which provides a means to promote self-confidence towards learning new and unfamiliar material, Bandura (1977) provides the means to understand how IC is built at the micro-level. SLT is therefore used as the lens through which to view the building of IC by students. The micro-IC flow model used for analysing flows of knowledge is presented in Figure 1. It follows Penrose (1959) in suggesting that there is a close relation between the various kinds of resources used and the development of the ideas, experience and knowledge of the humans associated with those resources. That is, resources are a potential that needs to be made productive through services, "but experience itself can never be transmitted; it produces a change - frequently a subtle change - in individuals and cannot be separated from them" (p. 53).
This model differs from the categorical models, which are dominated by social capital creation (e.g. Nahapiet and Ghoshal, 1998), improved leadership (e.g. Johnson, 1999) and "knowledge, experience, expertise and associated soft assets, rather than hard physical and financial capital" (Klein, 1999, p. 1). The model has four interacting elements. There must be a diverse knowledge base (Penrose, 1959). This is created by the instructor assigning students to teams on the basis of self-reported characteristics so that each team comprises different work experience, academic backgrounds and proficiencies (Oliver and Coyte, 2011); a sketch of such an assignment appears below. There must be motivation for establishing and maintaining communication (Massey, 2001). It is provided by all the members being given equal status and being expected to contribute, and is reinforced by the psychology of the in-group relative to the other groups in the classroom, the out-groups (Evered and Louis, 1981). A common problem or issue (Schein, 1970) facilitates knowledge flows; in the case of the present study, the task is the practical business-oriented activity which occurs inside the classroom. Finally, the providers and recipients of knowledge must be credible sources of knowledge (Lave and Wenger, 1991). This is achieved by requiring the students to familiarise themselves before class with the basic analytical frameworks and straightforward applications of them. The interconnected nature of Figure 1 indicates that none of these elements is dominant; their close coupling is what creates knowledge flows and builds IC.
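As a concrete illustration of the first element, the following minimal sketch shows one way an instructor might spread self-reported characteristics across teams. It is not the authors' actual procedure: the attribute names, the example data and the sort-and-deal heuristic are all assumptions introduced purely for illustration.
```python
from itertools import cycle

# Hypothetical self-reported characteristics collected before semester.
students = [
    {"name": "A", "experience_yrs": 5, "background": "accounting"},
    {"name": "B", "experience_yrs": 0, "background": "engineering"},
    {"name": "C", "experience_yrs": 2, "background": "marketing"},
    {"name": "D", "experience_yrs": 7, "background": "accounting"},
    {"name": "E", "experience_yrs": 1, "background": "engineering"},
    {"name": "F", "experience_yrs": 3, "background": "finance"},
]

def assign_teams(students, n_teams):
    """Deal students out round-robin in descending order of experience,
    so each team receives a mix of high- and low-experience members and,
    as a by-product, a spread of academic backgrounds."""
    ordered = sorted(students, key=lambda s: s["experience_yrs"], reverse=True)
    teams = [[] for _ in range(n_teams)]
    for student, team in zip(ordered, cycle(teams)):
        team.append(student["name"])
    return teams

print(assign_teams(students, n_teams=2))  # [['D', 'F', 'E'], ['A', 'C', 'B']]
```
The round-robin deal is only one heuristic; any allocation that prevents experience or background clustering within a team would serve the diverse-knowledge-base requirement equally well.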
3. Methodology: Since the micro-IC flow model is being considered, it is necessary to specify an epistemological basis, and for this Crotty (1998) is used to both determine and justify the methodology in this paper. Since the research question is a pragmatic one, to establish a micro-IC flow in the classroom, epistemology is of concern because the subject matter deals with knowledge (the recipient's stock) and knowing (the provider's stock). Since students are largely unaware of their own IC creation, we adopt a constructivist viewpoint. The data were collected in a case study (Yin, 2003), as this uses multiple data collection and analysis methods to provide a rich set of data. The case study is longitudinal, commencing in 2009 and continuing, to ensure consistency with the objective of "transformative redefinition" (Alvesson and Deetz, 2000, p. 19). The case study source is an entry-level postgraduate unit, "Managerial Accounting and Decision Making". There are multiple streams, each with a three-hour class of usually between 36 and 48 students. Teams are created comprising six members, and there are typically six to eight teams in each class. Each week the unit focuses on the management accountant's work as an internal business advisor in conducting an analysis of a scenario and recommending a managerial decision on cost accounting, pricing, inventory, budgeting, variances and strategic controls and performance. This approach was itself the outcome of applying the Alvesson and Deetz (2000) framework to an IC practice, that of teaching. Figure 2 indicates the activities completed in class that are intended to build IC for students. In summary, students arrive in class and sit a readiness assurance test (RAT) which focuses on the essential topic knowledge. After completing the individual RAT, the students complete the same test as a team, during which they gain feedback on the correct answers. The team discusses its answers, sharing knowledge and peer-teaching concepts and their application to one another. This aspect of their learning draws on the team-based learning of Michaelsen (2004a). Consideration of both individual and team performance forms the basis of a short remedial lecture (Michaelsen, 2004b) which provides feedback and leads into the application of their knowledge (Michaelsen, 2004c). The majority of the class time is taken up with an application of their knowledge to a business problem designed by the unit coordinator around the week's topic, involving quantitative and qualitative issues as well as moral dilemmas drawn from the instructor's business experience. The students perform a preliminary analysis of the evidence tabled in class and have to select a business decision from six equally attractive options. All the options are equally plausible, so teams are not avoiding "the incorrect" options but have to evaluate the relative merits of each of the alternatives, given the issues in the case and the context of the business. All the teams' business decisions are simultaneously revealed, and then each team has to justify its chosen business decision and respond to challenges made by other teams. There are additional steps in considering implementation and reflection on learning.
4. Findings: In conceptualising the knowledge flows, the simplest approach (e.g. Bontis, 1999) is to consider the students as nodes and ask what knowledge is flowing from and to them. A partial answer to this question is summarised in Table III. It is a partial answer because the knowledge flow is a combination of explicit and tacit knowledge (Polanyi, 1966), and the student has to make sense of the knowledge flow either for themselves or by initiating discussion with other members (Brookfield and Preskill, 1995). For this reason Table III identifies the student as an individual and the team as an entity with student members. However, unlike Bontis (1999, p. 445), who suggests that the intermediate product or information from a given node centres on work performed, our view of micro-IC is that a node is a temporary knowledge stock. While for business, knowledge stocks and flows are closely related to business performance (Cohen and Levinthal, 1990), although their relative contributions are unclear, for students the flow is essentially a potential. In business, misalignment acts as a "detriment to the overall efficiency of the organisation" (Bontis, 1999, p. 453). In the classroom, any misalignment can be corrected by intervention from the instructor during the class or through providing feedback. So this research supports a micro-view of knowledge flows in two ways: by confirming the advantages of adopting a micro-analysis and by showing that IC flows do not directly and immediately change IC stocks. This two-step approach is discussed further below. The micro-IC flow model suggests that the traditional focus of students on the lecturer and lectures under-estimates both the sources of IC and the kinds of IC that they build. With regard to the sources of IC, it is apparent that in a team-based classroom environment a substantial proportion of the IC which is accumulated comes from the other team members rather than the instructor[2]. To understand this, a focus group was conducted to explore whether students were conscious of the knowledge flows in the classroom. The consensus among the students was that when the instructor was talking they considered the material to be exposition and commentary on the textbook, e.g.: Like the others [here in the focus group] I know that I must learn this. I know we have been told [by the instructor] that we should do questions but there is not much time left after doing all the reading. I am ok at problems. Anyway, it is the theory I have problems with. So I try to read the chapter more than once. This predilection for and veneration of codified knowledge was reiterated in different ways: My textbook is second hand [...] I don't need to make notes. [Because] everything I need [to know is] already underlined. I don't have paper notes anymore. I keep everything on my iPad. It is easy to find what I need to know. I think we should be able to use the iPad in the class. What became apparent was that the knowledge flows engendered subtle moral IC. Students had not always seen the helping behaviour of other team members in terms of their own adoption of helping behaviours. When asked to describe how problems were solved in the team, the students, in a stream-of-consciousness commentary, primarily recalled the important contributions made by team members: I knew the formulas for variances from [when I was] an undergraduate.
When we [were] talking they [the team] showed me how I could simplify my calculations by using a common multiplier. My uncle had told me he used pricing in his distribution business but I didn't understand it all. I found out from my team members what he was probably doing. Using SLT reveals some knowledge flows that are normally ignored: first, there are unspoken knowledge flows as students observe the behaviour and performance of other teams, leading them to rate their own and their team's performance; and second, there are residual images of behaviour at the sensory and emotional levels concerning the level of engagement of teams, their reaction to feedback and their subsequent performance in later classes. In practice, allowing students to take an authority role in their team seems to increase the number of knowledge flows. However, the flow of knowledge to create IC appears to be both under-estimated and under-appreciated, owing to the source and destination of many of the knowledge flows being the students themselves. Table IV reveals that knowledge stocks are refreshed, supplemented and discarded by knowledge flows. As stated above, despite knowledge flows occurring, there is a lag between the knowledge flow and building IC. Some of the lag is due to different rates of knowledge consolidation. Discussion with students makes clear that even providing them with clear explanations does not guarantee immediate consolidation into IC. Cognitive psychology suggests that this is not uncommon. Thus it is reasonable for students to be equivocal when asked about their stock of knowledge. As knowledge flow occurs in team task activities (shown in Figure 2), drawing attention to the different knowledge nodes (from Table III) and the different types of know-how (Table IV) appears to be more successful in helping students build their IC, particularly when the instructor announces they are adopting the role of coach. A micro-IC view therefore does not rely on critical knowledge flows, as is often the case with business performance (e.g. Bollen et al., 2005), although it is necessary to recognise the importance of collaboration (Murray and Moses, 2005). This suggests that many IC-related artefacts (such as listening, probing, seeking additional information and declaring an opinion) are not perceived as knowledge flows. Much of the IC literature has been concerned with categorisation problems rather than understanding what the participants regard as IC. This contrasts with organisations, where the relationship between IC and competitive advantage is clearer.
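The lag between a knowledge flow and the consolidation of IC described above can be pictured with a toy model. This is purely illustrative and not something measured in the study: the weekly flow values and the consolidation rate are made-up parameters, chosen only to show how a stock can trail the flows that feed it.
```python
def consolidate(flows, rate=0.5):
    """Toy model: weekly knowledge flows (arbitrary units) consolidate
    gradually into a knowledge stock at the given rate per week."""
    stock, pending, history = 0.0, 0.0, []
    for flow in flows:
        pending += flow          # the flow is received this week
        gained = rate * pending  # only part consolidates into stock now
        stock += gained
        pending -= gained        # the rest lags into later weeks
        history.append(round(stock, 2))
    return history

# Two weeks of flows, then none: the stock keeps rising afterwards,
# echoing the observed lag between flow and IC.
print(consolidate([1, 1, 0, 0]))  # [0.5, 1.25, 1.62, 1.81]
```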
5. Conclusion: This research presents a simple basis for assessing micro-IC using knowledge flows. It shows the importance of integrating IC with research in the domains of knowledge management, organisational learning and educational performance. A micro-IC approach is consistent with the broad objectives of IC in that it seeks to transform student learning by using classroom practices to show students the benefits of making the change to their learning behaviour, and then reinforcing it through productive team work oriented to professional effectiveness and career readiness, recognising the place of teams in business (e.g. Collins and Clark, 2003). A micro-IC view is compatible with any teaching approach that combines problem solving and interactive communication, building technical and generic skills as well as attributes (Ramsden, 2003). The main limitation of the present case study lies in its specificity with regard to the discipline, although the knowledge flows and micro-IC can be regarded as potentially generic. While additional studies should demonstrate the value in micro-IC, there remains the problem that there is no causal connection between teaching and learning and building IC. It is for this reason that emphasis is placed on generic business skills rather than discipline knowledge. The description of the methods used (readiness assurance test, remedial lecture and business practical to apply knowledge) is not prescriptive. The methods adopted have proven successful, but they are themselves evolving as the IC of the unit coordinators is built. As well as their own teaching skills, this extends to their ability to innovate value-adding changes to the teaching methods and materials. However, the IC of the instructors involved in the unit of study is the subject of a separate paper. The advantage of discussing a business problem that students are likely to encounter in the workplace is that the knowledge flows are expanded from recitations of memorised facts into discussions of how topic knowledge is applied. This makes the knowledge flows more closely aligned with the kinds of desirable IC that employers seek, such as adaptability, quick thinking and considering a problem from many dimensions. In making workplace competencies the primary focus, much of the feedback that occurs in incidental knowledge flows is regarded as "inherently rewarding" (Bandura, 1977, p. 163), as students successfully create their own solution rather than search for an extant solution and the "correct" answer. The second area in which micro-IC is built is in making and defending decisions, where a shared interpretation and joint analysis of a business practice leads to topic knowledge being reshaped, applied and adapted to the context of the business practical case. To reach a recommendation, the team negotiates, evaluates, synthesises and communicates financial and non-financial information without the normal information overload which an individual can experience, allowing students to admit weaknesses and elicit assistance from other team members, and enabling all members to reinforce material which they have already studied but have not fully appreciated. Here the IC that is built concerns the readiness to consider alternate opinions and look for criticism.
With guidance from the instructor, teams accumulate IC through efficacious social learning experiences that involve secondary skills such as communication, analysis, synthesis and evaluation, which they can integrate with their professional and personal skills. In considering the impact of micro-IC beyond the classroom, the author relies on unsolicited personal communications provided by students after completing their studies, received since the adoption of this method in 2009. The first concerns improved student performance during recruitment interviews by showing that they are indeed "job-ready". One student instanced an interview with KPMG in which they found themselves able to handle what in the past they had considered to be difficult interview questions. The second concerns the confidence-building experience gained from having analysed and recommended decisions. An already employed student referred to actively seeking collaborators from other departments when assigned tasks; they had previously worked alone "to get the job done quickly". A student in an internship was offered a position on the basis of their leadership in looking for a range of alternatives rather than fixating on one solution. Consistent with the bottom-up micro-IC approach outlined, these possibilities outside the classroom are student dependent. It would therefore be erroneous to generalise from these instances to suggest that these are the typical outcomes that result. On the other hand, it does seem reasonable to alert students that there is a delayed return on the social capital which they have built as a result of their active participation in knowledge flows throughout the semester. Using this method is therefore likely to produce students who remember this kind of learning experience in the same way as has been reported for teachers of pupils in their early years. There remains scope for future research with a critical management focus. SLT, with its emphasis on observation and modelling of behaviour, could be used to explore student micro-IC from the point of view of the kinds of activities in the classroom. Consistent with the earlier discussion on stocks and flows, the class activities would require identification of both the task and its outcome to determine the nature of the micro-IC being created. Outcomes in terms of career path and job responsibilities could be tracked via the alumni database, with random sample follow-ups to ascertain the extent to which the student attributes their current capabilities to their studies and the unit of study that is the subject of this paper. This method would need to consider both the ambiguous temporal precedence and history threats to validity. These could be minimised by a longitudinal beyond-the-classroom study particularising the knowledge flow each participant considered most important for them. This paper has focused on the micro-IC inside the classroom using Bandura's (1977) SLT. The advantage of SLT is that potentially its self-efficacy elements can be re-applied to knowledge flows within business to use a micro-IC value perspective rather than the current measurement of stocks, including in some cases their conversion into currency amounts. This would offer businesses a less rigid view of IC, which would be particularly appropriate where IC develops in a context where new knowledge is being added, older knowledge has to be discarded and all knowledge is continually being churned.
Finally, it would have the benefit of allowing the employees themselves to become aware of their own micro-IC.
|
An absorptive capacity interpretation of Six Sigma
|
[
"Six sigma",
"Operations management",
"Performance management"
] |
Summarize the following paper into structured abstract.
Introduction: Six Sigma continues to emerge as a key business improvement approach in many leading organisations. The number of publications on the technology and management aspects of Six Sigma has increased significantly from the late 1990s onwards (Antony, 2005; Buch and Tolentino, 2006b; Nonthaleerak and Hendry, 2008). Such development is reflected in both the practitioner and the academic literature. For example, the ABI/Inform database shows an increase of circa 500 percent in peer-reviewed articles between the years 2000 and 2007 (the 2008 trend is similar to date). However, Six Sigma remains enigmatic as a change management philosophy or methodology due to a lack of consistent development and integration of theory and practice in the literature and academia (Llorens-Montes and Molina, 2006; de Koning and de Mast, 2006; Nonthaleerak and Hendry, 2008). Llorens-Montes and Molina (2006) conclude that "Six Sigma is an organisational phenomenon that has been given little research attention". More recently, a number of studies have focused on Six Sigma as a change management approach at a strategic level beyond its initial statistical definition (Friday-Stroud and Sutterfield, 2007). These studies include applications of Six Sigma in other sectors such as service industries (see, for example, Proudlove et al.'s (2008) study in healthcare, Stamatis's (2003) study in finance or Furterer and Elshennawy's (2005) study in local government). The paper uses the broad conceptual framework of absorptive capacity both to show how Six Sigma can contribute to the dynamic capability of an organisation and to act as a structure for the literature review, in relation to its four related constructs of acquisition, assimilation, transformation, and exploitation of new knowledge or technology (i.e. Six Sigma) within an organisation. Therefore, the aim of this paper is to critically review the literature of Six Sigma using a consistent theoretical perspective and framework, namely absorptive capacity, to guide and underpin the review process.
Definitions - what is Six Sigma?: In strict definitional terms "sigma" is a Greek letter that denotes the standard deviation used to describe variability, and is applied as a statistical process technology measure in organizations. As shown by Goh (2003) and Breyfogle (2003), a sigma quality level offers an indicator of how often defects are likely to occur in the process being reviewed; the higher the sigma level, the less likely it is that a process will create defective parts. The sigma levels and corresponding defect levels are derived from the normal probability distribution curve for an organizational process. These levels are expressed in terms of defective parts per million opportunities (DPMO): Sigma 2 level - 308,537 DPMO; Sigma 3 level - 66,807 DPMO; Sigma 4 level - 6,210 DPMO; Sigma 5 level - 233 DPMO, and Sigma 6 level - 3.4 DPMO (from Pande et al. (1999)). As a result, the term "Six Sigma" has developed as an aspirational quality measure for organizational processes (a "good" organization usually being "four Sigma" or higher). Therefore, the main theme of Six Sigma is that of focusing on reducing variability in processes (Antony, 2006). Some writers claim that, over the past ten years, Six Sigma has started to develop beyond a technology-based statistical process approach towards a broader change management philosophy and approach (Schroeder et al., 2008; Choo et al., 2007; Buch and Tolentino, 2006b; Wiklund and Wiklund, 2002). They contend that this development has been driven by increasing global competitiveness, which has resulted in Six Sigma being "not referred to as a quality tool, but rather as a business strategy" (Breyfogle, 2003, p. 2).
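As a worked illustration of the figures quoted above from Pande et al. (1999), the following minimal sketch (not from the paper; it assumes the conventional 1.5-sigma long-term process shift used in Six Sigma practice) reproduces the DPMO values for each sigma quality level:

```python
# Minimal sketch: sigma quality level -> defects per million opportunities
# (DPMO), assuming the conventional 1.5-sigma long-term process shift.
from scipy.stats import norm

def dpmo(sigma_level: float, shift: float = 1.5) -> float:
    """DPMO = 1,000,000 * P(defect) for a one-sided specification limit."""
    return 1_000_000 * (1.0 - norm.cdf(sigma_level - shift))

for level in range(2, 7):
    print(f"Sigma {level}: {dpmo(level):,.1f} DPMO")
# Prints 308,537.5; 66,807.2; 6,209.7; 232.6; 3.4 - matching the levels above.
```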
Review of Six Sigma theoretical development - absorptive capacity perspective: From a statistical and a historical perspective the development of Six Sigma theory is based on process control theory (Thomas and Barton, 2006; Nonthaleerak and Hendry, 2006). Nonthaleerak and Hendry (2008) contend that the development of Six Sigma in academia is still in its infancy. The theoretical discussion of Six Sigma in this wider context has developed from circa 2003 onwards but is limited to representations of specific features of Six Sigma using an eclectic range of theories (Llorens-Montes and Molina, 2006; Linderman et al., 2003). As shown by Schroeder et al. (2008) and Nonthaleerak and Hendry (2008), no coherent and overarching body of theory has emerged to underpin or drive Six Sigma developments in practice (Gowen and Tallon, 2005). Linderman et al. (2003) and Choo et al. (2007) suggest that a "goal-theoretic" approach to Six Sigma enables the overall benefits, goals, and purpose of Six Sigma to be explored within a wider management context. They emphasise the need for clearly specified goals (e.g. operations process limits) and increased goal difficulty (e.g. achieving Six Sigma levels in process defects - 3.4 defects per million opportunities - DPMO) leading to improved performance in Six Sigma. However, the goal-theoretic approach does not explain the dynamics involved in adopting and legitimising Six Sigma within organisations and within, or as part of, other organisational change programmes (Strang and Jung, 2009). Paralleling the goal theory approach, Buch and Tolentino (2006b) use the theory of work motivation to study the effect of Six Sigma-based rewards on employee motivation to participate in Six Sigma projects. While this theory is useful, it is localised in that it does not address the knowledge acquisition aspects of Six Sigma and its synergies with other change management approaches (Zahra and George, 2002). Gowen and Tallon (2005) and Llorens-Montes and Molina (2006) examine theoretical conceptions of Six Sigma from an economic perspective, focusing on the effect of technology intensity on adoption of Six Sigma. Their work uses the resource-based view (RBV) and dynamic capabilities theory to probe acquiring Six Sigma competencies for sustainable competitive use of resources. However, these theoretical approaches do not address goal-theoretic, motivation, and behavioural theories of Six Sigma, other than leading to agency theory, reflecting the belt system of Six Sigma training and development (Llorens-Montes and Molina, 2006). McAdam et al. (2005) view process-based characteristics of Six Sigma as having mechanistic process-based components in conjunction with total quality management (TQM), ISO and other business improvement methodologies. Mechanistic theory supports the application of approaches to Six Sigma with specified goals and targets (Linderman et al., 2003; Choo et al., 2007). Spencer (1994) shows that the organic theory of TQM is phenomenological in perspective and supports inquiry into meaning, subjectivity, and learning experience within change management programmes (McAdam and Lafferty, 2004).
However, mechanistic and organic conceptions of Six Sigma, while useful in highlighting dichotomies, do not show the complexity and dynamics associated with Six Sigma adoption and development at multiple levels in organisations (Daghfous, 2004). A summary of representative theoretically linked empirical survey studies of Six Sigma is shown in Table I, which shows the diversity of theoretical perspectives used and the tendency to focus on a single aspect of Six Sigma. Cohen and Levinthal (1990), Zahra and George (2002) and Lane et al. (2006), amongst others, have developed the concept of absorptive capacity as a specific dynamic capability that enables both RBV and organisational learning theories and constructs to be represented within the "multidimensional construct" of absorptive capacity (Cohen and Levinthal, 1990). In seeking to review and advance the limited but diverse theoretical developments of Six Sigma, an overarching absorptive capacity framework (Figure 1) is suggested. Daghfous (2004) and Zahra and George (2002) define the four dimensions of absorptive capacity as acquisition, assimilation, transformation, and exploitation (Figure 1). Within each of these dimensions there are underlying routines which are dynamic and interrelated. Absorptive capacity posits that these organisational routines enable a specific technological change (such as Six Sigma) to be adopted, developed and to produce benefits (Lane et al., 2006). Gowen and Tallon (2005) suggest that a dynamic capability representation of Six Sigma can provide a framework for understanding both the technical and human aspects of Six Sigma. Table II has been constructed from the work of Cohen and Levinthal (1990), Zahra and George (2002) and Daghfous (2004) to show how absorptive capacity can be viewed as a dynamic capability with the four key dimensions and their elements. These dimensions and elements (or "influencing factors" - Zahra and George, 2002) have enabled specific inquiry-based questions (right-hand column of Table II) to be posed in relation to treating Six Sigma as new knowledge to be effectively absorbed within an organisation. Table III shows how existing and eclectic elements of Six Sigma theory (from Table I) can be placed within an absorptive capacity framework, showing the paucity of theoretical representations that cover all four related dimensions of the framework, which limits the potential of representing Six Sigma as new knowledge or technology that can be dynamically absorbed within an organisation based on its dynamic capability or absorptive capacity (Cohen and Levinthal, 1990; Lane et al., 2006). It is therefore suggested that taking an absorptive capacity perspective on Six Sigma and its application within organisations will enable researchers and practitioners to further probe and understand the multi-faceted and dynamic nature of this phenomenon. The rest of the paper uses the absorptive capacity framework and its four dimensions, namely acquisition, assimilation, transformation, and exploitation (Figure 1; Table I), to structure the critique of the literature and to help in determining further research agendas.
Acquisition of Six Sigma: The acquisition dimension of absorptive capacity as a dynamic capability (Table II) relates to an organisation's ability to recognise, value and acquire new knowledge or a technological change competency (de Mast, 2006; Zahra and George, 2002; Daghfous, 2004), namely Six Sigma in the current context. The questions derived in Table II have been used as a guide to structure the inquiry in this section. The literature suggests that organisational interest in Six Sigma is triggered, to at least some degree, by perceptions of its "newness" or distinctiveness in comparison to other business improvement approaches. The question of newness (Table II), which is central to decisions to acquire Six Sigma, identifies a number of issues within the literature. Dahlgaard and Dahlgaard-Park (2006) ask whether Six Sigma is "old wine in new bottles". A number of writers suggest Six Sigma is more or less completely new in comparison to other business improvement approaches and hence offers new opportunities and distinctive competencies (de Mast, 2006; Rylander, 2006). However, others emphasise the complementarities, overlap and path dependency of Six Sigma with existing knowledge and approaches such as TQM, business process re-engineering (BPR), and lean manufacturing (Pinto et al., 2008; de Koning and de Mast, 2006; Furterer and Elshennawy, 2005), suggesting, consistent with absorptive capacity tenets, that prior investment in these forms of knowledge will create the need for the acquisition of Six Sigma as a further development of existing knowledge (Zahra and George, 2002). From a pragmatic standpoint, Daghfous (2004) states that environmental scanning and external benchmarking to acquire new competencies rely on both newness and overlap with existing competencies to increase confidence of potential organisational fit. Grint (1997) indicates that new or emergent approaches to management and business improvement ultimately have a degree of overlap in the historical ancestry of their underlying assumptions. Tennant (2001), Benedetto (2003) and Goh (2003) refer to the historical roots of Six Sigma in statistical process control as developed by Shewhart and Deming and see the current movement as being derived from these mechanistic beginnings. It is suggested that the newness of Six Sigma is a relative term, affected by the historical development of business improvement approaches and the specific organisational context (Kumar et al., 2006). In relation to external achievements or benefits from Six Sigma (Table II), a large number of articles and books use case material and benchmarking studies from leading large international organisations (such as GE, Eckes, 2001; Motorola, Pande et al., 1999) in claiming that Six Sigma can enable a radical improvement in key performance measures. From the internal perspective, the literature focuses on the failure of existing or current business improvement approaches to address technological and market-driven change, hence creating the need for a new approach, namely Six Sigma. For example, Six Sigma is portrayed as overcoming the failure of TQM and ISO 9000: 2000 to produce radical as opposed to incremental change (Black and Revere, 2006; Raisinghani et al., 2006) and the failure of BPR to address both people and process issues in a holistic manner (Benedetto, 2003). A further factor in acquiring new competencies is the level of prior investment (Zahra and George, 2002; Lane et al., 2006).
An organisation that has invested heavily in business improvement approaches with allied technology may view Six Sigma as a further extension of existing competencies, while an organisation that has invested little may perceive Six Sigma to have a much higher level of newness (Pinto et al., 2008; Black and Revere, 2006; de Koning and de Mast, 2006). In relation to who makes adoption decisions (Table II), the literature emphasises the role of the top management team and the managing director (or equivalent) as opposed to a more bottom-up decision-making process (Llorens-Montes and Molina, 2006). The case evidence shows that these high-level decision makers can be influenced by management fashions and fads more so than the organisational practitioners who regularly use existing business improvement approaches (Bailey et al., 2001; McAdam and Lafferty, 2004). Hence, some of the rhetoric associated with Six Sigma may overemphasise its relative newness and radical effectiveness to those making decisions to acquire the approach (Schroeder et al., 2008; de Mast, 2006; Hensley and Dobie, 2005).
Assimilation of Six Sigma: The assimilation dimension of absorptive capacity (Figure 1) relates to an organisation's ability to understand and learn from acquiring Six Sigma (Daghfous, 2004; Zahra and George, 2002). The elements and questions associated with this dimension are shown in Table II and are used to structure and guide the inquiry in this section. The dynamic capability representation implies that effective assimilation of Six Sigma must be linked to, and build on, effective acquisition and its elements. The literature indicates that the hierarchical top-down ethos of Six Sigma is reflected in organisational attempts at learning and assimilating the approach (Llorens-Montes and Molina, 2006; Linderman et al., 2003; Buch and Tolentino, 2006b). A number of supportive, mainly statistically based, tools and techniques have been developed, along with rigorous training procedures involving distinct levels of competency denoted by different coloured belts, culminating in the titles of Champion, Master Black Belt, Black Belt, Green Belt, and Project Champion (Nonthaleerak and Hendry, 2008; Ingle and Roe, 2001; Eckes, 2001). Llorens-Montes and Molina (2006) suggest that the Six Sigma belt system reflects agency theory, where agents are seen as vital to cascading the change message throughout the organisation to achieve its objectives. Key individuals who have undergone intensive Six Sigma training are emphasised as being the acknowledged authoritative experts (de Koning and de Mast, 2006) and champions (Ingle and Roe, 2001) of Six Sigma (e.g. Black Belts, Eckes, 2001). The role of trained process-level teams (Table II) in carrying out Six Sigma-based improvement projects is also emphasised in the literature (Banuelas et al., 2006; Henderson and Evans, 2000) as being vital to its success. There is a paucity of literature on the mechanisms for the proposed empowerment of teams (Dahlgaard and Dahlgaard-Park, 2006; Little, 2003). In terms of structure and processes (Table II), Schroeder et al. (2008) and Buch and Tolentino (2006b) suggest that the Six Sigma structure suffers from being a "parallel learning structure" with its emphasis on Six Sigma agents and statistical routines. The top-down, agent-based approach focuses on "how-to", step-wise implementation approaches (Breyfogle, 2003; Pande et al., 1999). The key steps are: 1. Define phase - process mapping, data gathering, and problem definition. 2. Measure phase - process performance measures attached to the process maps. 3. Analysis phase - the application of statistical methods to the data. 4. Improvement phase - the application of tests such as design of experiments. 5. Control phase - ensuring the improvements are sustained. These steps form the define, measure, analyse, improve, control (DMAIC) acronym which is central to Six Sigma descriptions in the literature. These steps, which are claimed to support the successful implementation of Six Sigma, are contextual (e.g. Antony, 2005 - small- and medium-sized enterprise (SME) context). For example, step (2) is likely to differ in large and small organizations due to differing skill levels, as found by Antony (2005) in a study of Six Sigma in SMEs compared to large organisations.
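To make the measure and analyse phases concrete, the hypothetical sketch below (the function, data and scenario are illustrative assumptions, not drawn from the DMAIC literature) inverts the DPMO relationship shown earlier to estimate a process sigma level from observed defect data:

```python
# Hypothetical measure/analyse-phase calculation: observed defects and
# opportunities -> DPMO -> implied sigma level (re-applying the 1.5-sigma
# shift convention from the earlier sketch).
from scipy.stats import norm

def sigma_level(defects: int, units: int, opportunities_per_unit: int) -> float:
    dpmo = 1_000_000 * defects / (units * opportunities_per_unit)
    # Inverse of the DPMO formula: sigma = Phi^-1(1 - DPMO / 1e6) + 1.5
    return norm.ppf(1.0 - dpmo / 1_000_000) + 1.5

# e.g. 25 defects found across 1,000 invoices with 10 error opportunities each:
print(round(sigma_level(25, 1_000, 10), 2))  # ~4.31, i.e. between the 4- and
# 5-sigma levels quoted earlier, well short of Six Sigma performance.
```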
Nonthaleerak and Hendry (2008) critique the DMAIC formula based on a multiple case analysis (n=9), suggesting that the define and control steps have limitations, as discussed earlier, in that they may lead to more operational than strategic projects. There are some brief comments on leadership (Table II) for Six Sigma (Kuei and Madu, 2003) and some which briefly raise the empowerment-control tensions (Dahlgaard and Dahlgaard-Park, 2006; Wiklund and Wiklund, 2002). However, there is little emphasis on leaders acting in an inclusive manner to widen participation in Six Sigma (as suggested for organic-based TQM, Spencer, 1994; Choo et al., 2007). The Six Sigma literature relating to learning (Table II) can be classified under an action learning typology as shown in Table IV (adapted from Marsick and O'Neil (1999)). A number of writers suggest that the majority of Six Sigma learning takes place in an action-orientated environment involving problem solving and causal analysis of organisational operations processes (Antony, 2006; Little, 2003). In the typology of action learning shown in Table IV, the scientific learning category covers a systems approach to learning for individuals and teams with an emphasis on problem definition and internal and external system understanding. As shown in Table IV, the training and development of Six Sigma agents comes into this category, along with the top-down style of training for Six Sigma teams which focuses on statistical tools and techniques (Antony et al., 2008; Little, 2003) and defect causes and prevention (Breyfogle, 2003). Experiential learning (second column, Table IV) covers action, reflection, theory and practice, and is similar to Kolb's learning cycle, with a specific focus on learning in addition to problem solving (Box, 2006). There is less Six Sigma literature in this category, with learning outcomes tending to be of secondary consideration in comparison to process issues such as defect reduction and implementation. Llorens-Montes and Molina (2006) suggest that Six Sigma projects provide a good setting for double-loop learning (Table IV, right-hand side); however, the overriding Six Sigma mantra can inhibit reflective learning, which sees the need for more inclusive approaches to organisational change. Interestingly, the scant evidence of critically reflective learning has come, not from Six Sigma applied in manufacturing organisations, but from studies in healthcare, where mechanistic norms may be less prevalent (Revere and Black, 2003). Jeffery (2005) develops a critically reflective action research approach to Six Sigma projects within the traditional DMAIC framework, which enables wider aspects of organisational development and methodologies to be explored. A key assumption in Six Sigma training and development is that employees at all levels (Table II) will be capable of understanding or assimilating Six Sigma concepts, tools, and techniques (Little, 2003; McAdam and Lafferty, 2004). However, there is a lack of evidence in the literature to show that employees across a range of organisational levels and functions can demonstrate that they fully comprehend the statistical routines of Six Sigma (Nonthaleerak and Hendry, 2008). If, as suggested by Pande et al. (1999), the statistical basis of Six Sigma is taken as the central argument for its newness, then organisational understanding of that statistical basis is required.
Possibly, the overriding authority of the Six Sigma agents, as stressed in the literature (Breyfogle, 2003; Little, 2003), is a tacit acknowledgement that widespread in-depth understanding is infeasible given the statistical complexity of Six Sigma. This issue may also go some way to explaining why Six Sigma teams use a considerable amount of simplified TQM-based tools and techniques as opposed to relying purely on statistical routines. Ultimately, these tools and techniques detract from arguments that Six Sigma is entirely new (Grint, 1997).
Transformation using Six Sigma: The transformation dimension of absorptive capacity goes beyond acquisition and assimilation and focuses on an organisation's ability to develop routines that facilitate combining existing knowledge with the acquired and assimilated Six Sigma knowledge (Zahra and George, 2002; Daghfous, 2004). In other words, the transformation dimension covers how Six Sigma works within an organisation. Once again the elements and questions associated with this dimension are shown in Table II and are used to structure and guide the inquiry in this section. Zahra and George (2002) suggest that moving from assimilation to transformation represents a move from potential absorptive capacity to realised absorptive capacity (RACAP), as shown in Figure 1. The dynamic capability representation implies that effective transformation of Six Sigma must be linked to, and build on, effective acquisition and assimilation and their respective elements. Thus, a failure of a Six Sigma application to achieve its potential in an organisation may not be a failure of the methodology but may relate back to ineffective acquisition or assimilation, an issue which has been overlooked in many studies. The literature relating to the Six Sigma transformation dimension can be broadly divided into operational and strategic elements (Table II) (Yang and Yeh, 2007; Friday-Stroud and Sutterfield, 2007; McAdam et al., 2005). Operationally focussed literature emphasises the contribution of Six Sigma to improving operational processes and adopts a statistical and mechanistic definition of Six Sigma from a mainly statistical process control theoretical position (Antony et al., 2007a, b; Nonthaleerak and Hendry, 2006). The strategic Six Sigma-based literature focuses on Six Sigma as a strategic change management approach that can influence the strategic effectiveness and direction of an organisation, and uses a more eclectic range of theories, as shown in Table II. The literature can be further divided into contributions which emphasise the unitary use of Six Sigma (Pande et al., 1999; Breyfogle, 2003) and those which suggest the need for a combination of Six Sigma with other business improvement approaches (Table II) such as TQM or continuous improvement (Bendell, 2006; Yang and Yeh, 2007). The combined approach raises some conceptual difficulties. The current status of business improvement approaches such as TQM is recorded as becoming increasingly organic and focused on people and cultural issues in organizations from all sectors (Dahlgaard and Dahlgaard-Park, 2006; Grint, 1997). McAdam et al. (2005) argue that it is difficult to reconcile this type of change approach with a purist Six Sigma philosophy, which is ultimately based on mechanistic statistical process measures and which has its antecedents in mass manufacturing organizations (Schroeder et al., 2008). A classification of the Six Sigma literature in relation to its links (Table II) with other significant business improvement initiatives (Table V) reveals that, from a historical and developmental perspective, Six Sigma is intrinsically linked to the development of TQM (Pinto et al., 2008; de Mast, 2006; Revere and Black, 2003). For example, Buch and Tolentino (2006b) suggest that Six Sigma is an approach to organisational change that incorporates elements of TQM, BPR, and employee involvement, and that it is a specific approach to TQM. In a similar vein, de Koning and de Mast (2006) and Pinto et al.
(2008) suggest that Six Sigma is a key step on an organisation's journey towards total quality, where Pinto et al.'s (2008) study found that investment in TQM led to an increased likelihood of effective implementation of Six Sigma. The papers which link Six Sigma to BPR (Table V) are mainly based on Six Sigma's mechanistic or technical foundation in relation to process measurement and control, rather than being intrinsically linked to the historical development of BPR (Benedetto, 2003). Surprisingly, there are relatively few studies on the links between ISO 9000 (1994 or 2000 versions) and Six Sigma (Benner and Tushman, 2002). There is little evidence in the literature to support the view that Six Sigma has developed out of ISO, other than the joint use of mechanistic terminology such as "control" and "corrective action" (McAdam et al., 2005). In a longitudinal study of patenting activities in relation to process management, Benner and Tushman (2002) conclude that lean manufacturing and Six Sigma offer roadmaps for those on a quality journey towards TQM, while cautioning against the sub-optimisation implied by labels such as "Lean Six Sigma" or "Lean Sigma", where important distinctions and incompatibilities can be ignored; the literature examining theoretical and methodological compatibility is limited. Bendell (2006) shows that lean manufacturing emphasises the elimination of all waste, using a range of tools and techniques, while Six Sigma has an emphasis on prioritisation and statistical methods. The literature relating to Six Sigma in a strategic context suggests that it is a key part of business strategy and the review process for forming such strategies (Table VI). Writers such as Yang and Yeh (2007), Black and Revere (2006), Goh (2003) and Kuei and Madu (2003), amongst others, contend that Six Sigma can offer the possibility of transforming new knowledge (Table II) and has now broadened beyond its statistical and operational roots to become a guiding strategic influence for organizations. From a strategic perspective, Kuei and Madu (2003) suggest that Six Sigma has become "customer centric" (Table VI), enabling organizations to form strategies to achieve customer satisfaction, citing examples of both manufacturing (Motorola) and service (Citibank and the Ritz-Carlton hotel chain) organizations. This literature uses the comparative argument of TQM development as evidence of this transition, suggesting that TQM developed as a strategic change initiative after initially focussing on operational improvement projects (McAdam et al., 2005). In relation to the use of people factors in transforming Six Sigma knowledge and technology (Table II), the Six Sigma literature relating to the transformation dimension of absorptive capacity, from both operational and strategic perspectives, places emphasis on reward systems linked to specific Six Sigma goals and targets in organisations. For example, GEC and Seagate Technology have a bonus and promotion system based on Six Sigma achievement, and Black Belts at Bombardier Aerospace have Six Sigma-based bonuses and promotions linked to organisational performance (Buch and Tolentino, 2006a, b), which supports the goal-theoretic and motivational theories of Six Sigma (Llorens-Montes and Molina, 2006; Linderman et al., 2003; Schroeder et al., 2008). Buch and Tolentino (2006b) and Choo et al. (2007) found that there is a need for employees to value opportunities to acquire new knowledge and skills, that is, the valence of Six Sigma outcomes for employees (i.e.
compare the assimilation dimension of absorptive capacity), and that Six Sigma programmes should ensure that all these components are incorporated and used in a transforming manner within organisations, which is essential for increasing employee participation and, ultimately, the legitimacy of Six Sigma in organisations.
Exploitation of Six Sigma: The exploitation dimension of absorptive capacity relates to an organisation's ability to apply Six Sigma commercially to achieve organisational objectives (Figure 1). The exploitation dimension builds on the acquisition, assimilation, and transformation dimensions and probes how Six Sigma can increase competitiveness and profits (Zahra and George, 2002; Daghfous, 2004). The elements and questions associated with this dimension are shown in Table II and are used to structure and guide the inquiry in this section. Zahra and George (2002) suggest that moving from transformation to exploitation represents a further increase in RACAP (Figure 1) and relies on the dynamic capability representation, in that exploitation must build on the effective, and time-consuming, acquisition, assimilation, and transformation dimensions (and their respective elements - Table II) to fully realise the benefits of Six Sigma, an issue which is not fully explored in the literature. Thus, a failure of Six Sigma to achieve promised bottom-line results may have its roots in a lack of rigour in relation to the acquisition, assimilation, or transformation dimensions (Table II). In terms of business benefits (Table II), the Six Sigma literature relating to the exploitation dimension is divided along similar lines to the transformation literature, namely operational and strategic. There is a strong emphasis on operations-based business outcomes or benefits consistent with the mechanistic perspective (Choo et al., 2007; Kwak and Anbari, 2006), with the underlying assumption that increased operational efficiency will ultimately lead to improved business outcomes and strategic performance (Johnson and Swisher, 2004). de Mast (2006) sees Six Sigma benefits, in addition to operations equipment efficiency benefits, as including new competencies or the assimilation of new knowledge (from an absorptive capacity representation), with the resultant learning being a strategic benefit of Six Sigma (e.g. disciplined and effective problem solving and decision behaviour). Studies which include measures of Six Sigma benefits typically have a project or process level of analysis (Schroeder et al., 2008; Antony et al., 2008; Banuelas et al., 2006; Benedetto, 2003). These studies identify operational benefits such as cost reduction, cycle time reduction, and process quality improvement measures due to less variation in the operations processes (Antony et al., 2007a, b; Banuelas et al., 2006; Henderson and Evans, 2000; Breyfogle, 2003). Consistent with the statistical basis of Six Sigma, the overarching measure is that of defect reduction. Table VII also shows that a number of writers (Kuei and Madu, 2003; de Feo, 2001; Chowdhury, 2002) support the view that Six Sigma leads to improved customer satisfaction, which ultimately should have a strategic impact in relation to growth and competitiveness. Key tools are the "voice-of-the-customer" and Hoshins, which are used to translate customer needs into product and service requirements (Llorens-Montes and Molina, 2006; Yang and Yeh, 2007).
However, further analysis of these claims shows that they are based on Six Sigma leading to improved operations processes, which it is assumed ultimately leads to the customer receiving products with reduced variation (Henderson and Evans, 2000). In relation to the effect of sector and size on Six Sigma exploitation (Table II), most of the literature does not address these control effects systematically and is usually located in a single sector or organisation, as summarised in Table VIII. From a sectoral standpoint, the Six Sigma literature is predominantly manufacturing based, with mass manufacturing being the basis for most studies (e.g. Motorola, Dahlgaard and Dahlgaard-Park, 2006; GE, Eckes, 2001). However, there is evidence in both the academic and practitioner literatures that Six Sigma developments and applications in other organisational sectors and functions are growing rapidly (Chakrabarty and Tan, 2007; Antony et al., 2008; Sodhi and Sodhi, 2005 - service-based pricing; Antony, 2005 - SMEs; Table VIII), which is an indication that the Six Sigma discourse is deepening (de Koning and de Mast, 2006; McAdam et al., 2005). However, the lack of systematic research to support the descriptive claims in service sector studies, coupled with evidence that the service areas of a company struggle more than its manufacturing areas with quantitative measures (Proudlove et al., 2008; McAdam and Lafferty, 2004) and with the more involved and complex people interactions (as opposed to machine dominance), means that additional research is needed to investigate the appropriateness and effectiveness of Six Sigma in a service environment (Sehwail and DeYong, 2003). Chakrabarty and Tan's (2007) review of Six Sigma applications in services concludes that the development is slow but increasing in terms of structure and more in-depth applications. Another sectoral development has been the application of Six Sigma process and statistical principles to non-manufacturing long-run data streams. These applications are found mainly in the healthcare sector, both public and private, due to government emphasis on health sector reform (see, for example, Peltokorpi and Kujala, 2006 - hip replacements; Morgan and Cooper, 2004 - rehabilitation; Revere et al., 2004 - patient care). These studies involve specific applications of Six Sigma to specified business processes rather than institution-wide applications and include a range of stakeholders such as doctors, nurses, and health service management, as opposed to a singular customer (Raisinghani et al., 2006). A number of writers have argued the case for extending Six Sigma beyond operations and manufacturing processes to include design (Johnson and Swisher, 2004 - R&D; Holcomb, 2003 - product development; Hong and Goh, 2003 - software; de Feo, 2001). These studies are mainly based on large companies which have both design and manufacturing capability. The emphasis is on first applying Six Sigma to the manufacturing functions and then applying the same principles to the design functions (Banuelas and Antony, 2003). In these situations, the emphasis is not on large amounts of process data but on the iterations within a design process, prior to a design freeze or final design. The application of Six Sigma to small business (Antony et al., 2008; Thomas and Barton, 2006; Davis, 2003) has encountered similar problems to those of ISO within small businesses.
These problems include loss of inherent flexibility (Antony, 2005) and the tendency for small businesses to job-shop rather than engage in long runs (Davis, 2003). Thomas and Barton (2006) state that Six Sigma applications in SMEs are generally poor due to high costs, SME uniqueness, the complexity of the methods and a lack of trained resources, which inhibit the assimilation dimension of absorptive capacity. Some small businesses reluctantly apply Six Sigma due to pressure from large customers as part of a supply chain rather than seeing any business improvement benefit from its application (Conner, 2002). Kumar et al.'s (2006) case analysis found that a more broadly based hybrid Lean-Six Sigma approach was more useful than a more explicitly statistically driven methodology.
Conclusions and further research agendas: The Six Sigma discourse is continuing to develop and deepen. The earlier literature, which was predominantly based on descriptive accounts of organisational applications (Hopen, 2009), has increasingly been complemented by empirical studies which attempt to link theory and practice with more critical analysis (Choo et al., 2007; Nonthaleerak and Hendry, 2008; Linderman et al., 2003; Buch and Tolentino, 2006b). The absorptive capacity framework employed in this paper has been found to be useful for comparing and juxtaposing the eclectic range of theories used in theory- and practice-based studies (Daghfous, 2004; Gowen and Tallon, 2005). It helped to structure the literature and also enabled the review to probe the dynamic aspects of adopting and using Six Sigma by accepting absorptive capacity as a dynamic capability of an organisation (Zahra and George, 2002; Lane et al., 2006). It is suggested that absorptive capacity has potential for use as an overarching framework for reviewing and analysing other significant business improvement and change management approaches (Daghfous, 2004). The increase in the scope of the Six Sigma literature is reflected in studies covering service sectors (Antony et al., 2007a, b; Chakrabarty and Tan, 2007), business functions other than operations (Peltokorpi and Kujala, 2006) and SMEs in addition to large organisations (Antony et al., 2008; Conner, 2002). It is concluded that these studies question some of the underlying mechanistic and statistical assumptions of a narrow definition of Six Sigma and support its claims towards a broader change management approach that influences both strategy and operations, with the need for concurrent theoretical development (Nonthaleerak and Hendry, 2006; Friday-Stroud and Sutterfield, 2007; Wiklund and Wiklund, 2002). There is general agreement in the literature that further research into all aspects of Six Sigma is needed due to the emergent nature of the discourse (Schroeder et al., 2008; Llorens-Montes and Molina, 2006; McAdam et al., 2005). From an acquisition perspective of absorptive capacity, there is a need for research showing how Six Sigma acquisition decisions are taken. What is the basis for systematic Six Sigma acquisition decision making other than best practice studies (Yang and Yeh, 2007; Llorens-Montes and Molina, 2006)? Is there systematic best practice environmental scanning of competing approaches? How does the acquisition decision take account of existing or overlapping competencies within the organisation? Research questions in relation to assimilation could address the following: how does effective Six Sigma leadership moderate obstacles to assimilation (Llorens-Montes and Molina, 2006)? How effective are current Six Sigma education and training approaches at multiple organisational levels (Lee and Choi, 2006)? How can HR practices, employee attitudes, and team-based climates contribute to a Six Sigma learning environment in Six Sigma projects and teams (Nonthaleerak and Hendry, 2008; Llorens-Montes and Molina, 2006)? From a transformation perspective, questions include: how does Six Sigma integrate with the strategy formulation process and become identified as a strategic approach? In what way does it complement or conflict with existing strategic approaches such as the balanced scorecard and the business excellence model (Creasy, 2009; Pinto et al., 2008)?
How are Six Sigma projects selected using multiple criteria, and how are these linked to strategic objectives and targets (Banuelas et al., 2006)? What types of organisational problems are best suited to Six Sigma (e.g. qualitative or quantitative, long run or short run, service or manufacturing) (de Koning and de Mast, 2006)? What are the longitudinal effects of measures to address enablers and barriers to Six Sigma development (Antony, 2005)? Finally, from an exploitation viewpoint, what effect does Six Sigma have on a range of bottom-line performance measures over time, in addition to stock price (Goh, 2003)? What internal management performance measures can be improved in addition to defect reduction (Kwak and Anbari, 2006; Lee and Choi, 2006)? How can benefits from Six Sigma be more rigorously quantified beyond that of process yield measures, especially when it is used in a broad strategic context (Schroeder et al., 2008; Wiklund and Wiklund, 2002)?
|
Robert McNamara's "11 lessons" in the context of theories of strategic management
|
[
"Strategic planning",
"Ethics",
"Military actions",
"War"
] |
Summarize the following paper into structured abstract.
Introduction: A film produced in 2003 (The Fog of War) portrayed Robert McNamara's recollections of his life and the lessons that he identifies from them. There are 11 lessons in all, mainly drawn from his experiences as Secretary of Defense in the USA. Mr McNamara, however, has had a much wider experience, and these lessons can be compared with theories of strategic management with which, as a Harvard MBA, he would have been familiar. This paper first considers the man's career, the issues of defence in the USA, then the 11 lessons and, finally, the context of management theory. The strategy formulation process in the US Department of Defense during McNamara's term of office is used here as the unit of analysis, and little attention is paid to the resulting strategies, since comparisons of the content of strategies in defence and business are difficult to make, given the differences in the two activities. The film is autobiographical and may have a hidden agenda, so there is the possibility of bias and self-justification in the testimony. Furthermore, at the end of the film, Mr McNamara declines to answer the question, "After you left the administration, why did you not speak out against the war in Vietnam?", which indicates that there is more evidence which could have been made available. In this paper, at key junctures, evidence from other sources is used to triangulate Mr McNamara's recollections.
Mr McNamara: Robert McNamara was born in 1916 in California and graduated in 1937 from Berkeley, having majored in economics. His post-graduate studies at Harvard Business School were followed, after a short period with Price Waterhouse, by an appointment to the faculty to teach financial control. During the war, he served in the US Army Air Force and developed statistical systems for logistics and bombing analysis. Post-war he joined Ford Motors and rose rapidly through the management ranks until 1960, when he became the first President of the company not to be a member of the Ford family. In 1961, however, he accepted the invitation by the newly elected President of the USA, John F. Kennedy, to become the Secretary of Defense. He served in that position until 1968 when his increasing disaffection with the conduct of war in Vietnam led to his leaving the post. He then served as President of the World Bank from 1968 to 1981.
Staff processes in the Pentagon: Mr McNamara found on arrival at the Pentagon that the three arms of the forces, Navy, Army and Air Force, were largely autonomous in their budgeting, which led to duplication and waste. The National Security Act of 1947 had made the Joint Chiefs of Staff (JCS) system, begun during the war, a permanent feature and had created the post of Secretary of Defense to coordinate the activities of the three Services. In 1953, the Eisenhower administration introduced the Basic National Security Policy (BNSP) document, which was reviewed annually, and which set out the national strategic concepts. General Taylor, testifying to the sub-committee on National Policy Machinery in 1961, pointed out a major problem with BNSP: The paper was often ambiguous. For example, it would say we will depend upon these weapons of massive retaliation, but at the same time will maintain flexible mobile forces capable of coping with lesser situations in the world (Kaufmann, 1964, p. 24). A budget ceiling was introduced for defence spending, but military judgements were generally left unchallenged. Furthermore, the Chiefs of Staff had access to both the President and Congress, with a resulting potential for undermining the authority of the Secretary of Defense. The inter-Service rivalry was such that no coherent set of strategic plans was agreed, and an added complication was the strategic monism brought about by the nuclear arms race. Admiral Denfield had been a member of the JCS and is quoted as saying: ... on nine-tenths of the matters that come before them, the Joint Chiefs reach agreement among themselves. Normally, the only disputes are on strategic concepts, the size and composition of forces, and budget matters (Kaufmann, 1964, p. 19). McNamara must have known of a predecessor in his post, Louis Johnson, who had been sacked for reducing Service expenditures and cancelling favourite projects of the military. But, after the Korean War, when Eisenhower was President, defence was consuming 13.5 per cent of the Gross National Product of the USA and economies were at least desirable. The military-industrial complex was a powerful force that was barely under control. Schlesinger (1965, p. 284) commented: The cause lay in the feebleness of civilian control of the military establishment; and this feebleness was the result in great part of the absence of rational understanding and hence of rational direction. The armed forces, in preparing for future war, expected that it would be an all-out nuclear exchange with the Soviet Union. Although vast sums of money were being spent on defence, there were critical shortages in the armed forces. McNamara was appalled to find on assuming the post of Secretary of Defense such shortcomings as: In the 1955-58 period there were no less than four aircraft under development to perform the fighter mission - two in the Navy and two in the Air Force (Kaufmann, 1964, p. 30). McNamara found a total of fourteen Army divisions, of which only eleven were ready for combat ... if he sent 10,000 men to southeast Asia, he would deplete the strategic reserve and have virtually nothing left ... Equipment was so low that, when Kennedy inspected the 82nd Airborne at Fort Bragg in October, the division had to borrow men and material to bring itself up to complement ...
The airlift capacity consisted largely of obsolescent aircraft designed for civilian transportation: it would have taken nearly two months to carry an infantry division and its equipment to southeast Asia (Schlesinger, 1965, pp. 286-7). These, and other examples of the inadequacies of the armed forces, left the President with no flexibility in responding to Soviet aggression, and McNamara was anxious to move away from planning for a war that the USA would have least liked to fight. His problem was how to bring about the necessary change in strategy and to reform an inadequate system of strategy formulation, which Huntington (1961, p. 146) had described thus: ... Strategic programs, like other major policies, are not the product of expert planners, who rationally determine the actions necessary to achieve desired goals. They are the result of controversy, negotiation, and bargaining among officials and groups with different interests and perspectives. Thus, there were many areas in the Defense Department that needed reform.
McNamara as Secretary of Defense: McNamara's term as Secretary of Defense was controversial. Schlesinger (1965, p. 286) records that McNamara in his first few weeks in the Pentagon exclaimed, "This place is a jungle - a jungle." Nonetheless, he quickly set about the task of reforming the strategy formulation process. He summed up his approach thus: I see my position here ... as being that of a leader, not a judge. I'm here to originate arguments and harmonize interests. Using deliberate analysis to force alternative programs to the surface, and then making explicit choices among them is fundamental (Kraft, 1961). McNamara was aware of the difficulties caused by the existence of separate departments in the Pentagon for each Service and chose not to create a unified military staff, but centralized power in the office of the Secretary. "He conceived his role as that of an 'active manager', not as an umpire adjudicating the claims of the separate services" (Hendrickson, 1988, p. 39). McNamara set about this task using new techniques. Planning - programming - budgeting (PPB)
Strategy formulation: In common with all armed forces, strategy formulation under McNamara proceeded in two modes. Routine
McNamara's strategy drivers: From the foregoing, it would seem that Mr McNamara's strategic thought in the Defense Department was driven by the following considerations: * The search for options. McNamara rejected the nuclear monism that dominated military thought at the time of his appointment, and tried to create more options for the President and himself. * Civilian control. He was the first Secretary of Defense to control the military, as opposed to merely co-ordinating their activities. * Facts. McNamara insisted that all available facts, particularly numerical data, were assembled before a decision was taken. * Projects. New strategies came from, or were supported by, new equipment. On the other hand, the expense involved meant that any mistake or duplication of systems was very costly. Accordingly, projects came under heavy scrutiny. * Money. Money was the only common measure that could be applied. Budgeting became, therefore, the principal tool in the control of strategy development. * Efficiency. McNamara found duplication of resources, waste and needless expenditure when he entered office, and he strove to obtain better value for money in defence expenditure. Mr McNamara accumulated other experience after leaving the Department, so his 11 lessons, produced in his old age, cover a wider field than merely defence, although war and military matters dominate his account in the film. For instance, he does not use any examples in the film that are drawn from his time as President of the World Bank.
The 11 lessons: In the film, The Fog of War, Mr McNamara discusses 11 lessons that he has learned during his long life. The lessons are discussed in the context of military matters, but their message is extended here to refer to the conduct of business. The 11 lessons are: 1. empathize with your enemy; 2. rationality will not save us; 3. there is something beyond oneself; 4. maximise efficiency; 5. proportionality should be a guideline in war; 6. get the data; 7. belief and seeing are both often wrong; 8. be prepared to re-examine your reasoning; 9. in order to do good you may have to engage in evil; 10. never say never; and 11. you cannot change human nature.
The lessons discussed: For the purpose of this discussion some of the lessons will be taken together, and not in the order presented in the film. Rationality will not save us and be prepared to re-examine your reasoning
Maximise efficiency: Efficiency is the ratio of output to input, usually expressed as a percentage. Both economics and game theory associate rationality in choice with maximising utility or value. This approach may be true in matters of choice, but McNamara was obsessed with efficiency and the rational use of facts. He now seems to acknowledge that measuring the wrong output or being deceived by one's own perceptions are dangerous pitfalls, and that facts are not always what they seem. The problem is in knowing what to measure, particularly in the output. In Vietnam, the Americans were measuring the body count, which we have seen to be an ineffective means of measuring success in that war. McNamara seems mystified by the anomalies and uncertainties of life, but still believes that we should do the best we can to be efficient. He seems disappointed that the world is not as predictable as clockwork, and not as amenable to empiricism and rationality as he would wish. Proportionality should be a guideline in war and in order to do good you may have to engage in evil
Discussion: Mr McNamara was a manager, so his lessons can be seen in the context, not just of defence and grand strategy, but also of management theory. In trying to formulate his 11 lessons in management theory terms, however, the differences between the military and business need to be borne in mind. The military deal in death and destruction, and the scale of their activities, particularly in America, far exceeds that of any private firm, although, like business, they have to formulate strategy. In addition, most of the experiences related in the film are set in the 1950s and 1960s and so will have been informed by theories extant at that time. Theories such as Positioning, the Resource Based View, and Knowledge Management had yet to be proposed. During the 1950s and 1960s, the planning, rational approach to strategy formulation was in vogue, as was the belief in scientific management, and Mr McNamara was operating during that period. His doubts on rationality expressed in the film may reflect his exposure to later thinking in strategic management theory. Linking together some of the "Lessons", as in this paper, reveals that they can be seen as a series of paradoxes. Do these paradoxes indicate inconsistencies in McNamara's thinking, or are we to take these opposites as indications of the ambiguity, uncertainty and complexity of management at the strategic level? de Wit and Meyer's (2004) analysis of strategy was based on ten paradoxes, and the authors suggested that the really inspired solution may transcend the paradox and bridge between the two extremes to hold both of the two opposite perspectives simultaneously. If it proves impossible to have the best of both worlds in this way, it might be possible to trade off between the two opposites, or merely choose one or the other. The paradoxes in the 11 lessons usefully point up the dilemmas that McNamara faced during his life, particularly the moral ones. The study of strategic management has been informed by two major streams of thought: economics and sociology. The first seeks "laws" from factual studies, whilst the second, although also espousing empiricism, takes a human-centred, societal view of strategic activity. Other disciplines, such as psychology, anthropology, and Darwinism, have also made significant contributions. The qualitative/quantitative tension is evident in the many papers written on strategy and its formulation. These tensions are present in the 11 lessons, and McNamara points out that the rational and factual approach is insufficient, and that account has to be taken of values, morals and the perversity of human nature. All the evidence of McNamara's approach to strategy, however, places him firmly in the "Rational Actor" category, but, in his lessons, he warns against a total dependence on this mode of thinking. Furthermore, in his view, what seem to be facts can be misleading, although that should not prevent considering all that are deemed relevant. Although the development of new weapons systems and a rational budgeting system led to a gradual emergence of a US defence strategy, McNamara chose a clear diversification aim to lessen dependence on nuclear weapons.
Although at this time Ansoff (1965) was writing on the merits of a diversification strategy in business, we do not know who informed whom, nor indeed if the events are linked. Despite the advice he gives on revisiting one's thinking processes, the McNamara approach owes more to the "Design School" (Mintzberg, 1990) than to theories of emergence or logical incrementalism (Quinn, 1974). Strategy formulation in the Pentagon was linked to the budgeting process and so took place at the same time each year as a formal procedure. Strategy formulation for specific events had to be considered as they occurred, however, and these strategies were developed over time, as the events in Vietnam show. There was a conscious decision to escalate the conflict after the Maddox incident, and the "body count" arose from a change in strategy. So, whilst defence policy as a whole was planned, strategy for a particular conflict fits better into the logical incremental model. In practice, the Defense Department was not a free agent, and much of the continuous strategy development fitted better into Allison and Zelikow's "Governmental Politics" model. Several Departments and individuals were stakeholders in the strategy outcome, and so strategy was developed by negotiation and bargaining between powerful advocates. John F. Kennedy's decisions during the Cuban Missile Crisis were taken only after a free exchange of views by all concerned. Business strategists, too, need to be sure that they have sought a wide variety of opinion, not just those that are supportive. The Devil's advocate is a powerful stimulant to flexible thinking. McNamara is clearly concerned with ethical considerations and, given the scandals in some businesses in recent years, firms, too, need to espouse sound moral values. He seems to imply that the ends can justify the means, but is clearly troubled by the existence, and hence the possible use, of nuclear weapons. During the Cold War, it was argued that the world was a safer place for their existence through mutual deterrence, although that argument is difficult to maintain now in the face of proliferation. Should companies produce goods that can harm the users or others (tobacco, guns, drugs, etc.), justified on the basis that it is their use that causes problems, not the items themselves? More clearly, accounts should not be falsified (Enron), nor anti-trust collusion with competitors indulged in (Sotheby's/Christie's). McNamara demands moral standards, even though he acknowledges that human nature is flawed. If the latter is true, it is hardly safe to produce harmful goods, particularly nuclear weapons, nor to indulge in illegal practices, even when they result in maximising returns. Tuchman (1984) illustrates the dangers of cognitive dissonance, and McNamara provides further evidence from the "Maddox" incident. We do not always draw correct inferences from what we see or hear. Much of the discussion in the early part of the Cuban Missile Crisis was devoted to puzzling out what the various moves meant and what was in the minds of the Soviet leadership (May and Zelikow, 2002). The correct evaluation of Khrushchev's position by Llewellyn Thompson saved the day, although he had to summon the courage to tell the President he was wrong. Things are not always what they seem, and IBM went to the brink of disaster for the belief that the market needed just mainframe computers. This belief was comforting for IBM because they dominated that market and they wanted to believe this assessment.
They were wrong, and it took the IBM outsider, Louis Gerstner, to correct the blinkered thinking and the "say never" approach. Companies need to understand competitors' moves and to try to unravel their thinking and their motives. Only then can effective and proportionate counters be devised.

Drucker (1977) offered the following definitions:
* Effectiveness: the extent to which the desired result is realized.
* Efficiency: output divided by input, or the extent to which the result was produced at least cost (p. 561).

The distinction is important, because doing the wrong thing really well avails us nothing. Both setting objectives and measuring particular output parameters have to be carefully considered. Killing large numbers of the enemy in Vietnam may have created the illusion of progress and of meeting President Johnson's directive quoted above, but this measurement was not an indicator of success. Subsequent events have shown that the USA could never have succeeded, because its war aims did not fit the political environment of the time. Companies need to measure the right parameters, that is, those that reflect the achievement or otherwise of the aims of the strategy. We measure what we can quantify, but intangibles are important too. Clausewitz recognized this distinction (Rapaport, 1968, p. 185):

"But now the activity in War is never directed solely against matter; it is always at the same time directed against the intelligent force which gives life to this matter, and to separate the two from each other is impossible. But the intelligent forces are only visible to the inner eye, and this is different in each person, and often different in the same person at different times."

This distinction can be seen to pervade the 11 lessons, and McNamara seems exasperated at the inability of "facts" to yield answers without the addition of intangibles. This must be a bitter pill for one whose thinking seems rooted in that of the "scientific" management theorists.
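To make Drucker's two ratios concrete, here is a minimal Python sketch. The function names are illustrative, and operationalizing effectiveness as the realized share of a desired result is one reading of Drucker's definition, not a formula from his text.

```python
def efficiency(output: float, input_cost: float) -> float:
    """Drucker's efficiency: output divided by input (resources consumed)."""
    if input_cost <= 0:
        raise ValueError("input_cost must be positive")
    return output / input_cost

def effectiveness(realized: float, desired: float) -> float:
    """One reading of Drucker's effectiveness: the share of the
    desired result that was actually realized."""
    if desired <= 0:
        raise ValueError("desired must be positive")
    return realized / desired

# Doing the wrong thing really well: high efficiency, low effectiveness.
print(efficiency(output=900, input_cost=100))   # 9.0 units per unit of input
print(effectiveness(realized=10, desired=100))  # 0.1 -- only 10% of the goal met
```

The point the sketch makes is the one Drucker made: the two numbers are independent, so optimizing the first while ignoring the second simply produces the wrong result at least cost.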
Conclusion: Mr McNamara discusses 11 lessons he has learned in his long life. A passionate believer in empiricism and rationality, he reluctantly acknowledges that these are not enough, although irrationality in strategy formulation would clearly be absurd. Strategy is an art, not a science, and morals, ethics, and human frailty bear heavily on such decisions. Decisions taken without the widest consultation of opinions and facts are at the very least suspect, and more likely wrong. Analysis of the 11 lessons again calls into question the validity of the "Rational Actor" theory of strategy formulation and reveals the paradoxical, ambiguous and uncertain nature of the strategy process. One is left with the impression that McNamara finds the world of human endeavour deeply disappointing in its irrationality, absurdity and sheer inefficiency. The rational, empirical approach is not, therefore, in itself a sufficient method of strategy formulation.
|
Attitudes towards aging and older people's intentions to continue working: a Taiwanese study
|
[
"Elderly people",
"Attitudes towards aging",
"Subjective norm",
"Intention to continue working",
"Attitudes",
"Personal experiences",
"Taiwan"
] |
Summarize the following paper into structured abstract.
Introduction: Population aging is one of the most challenging issues of the twenty-first century, facing both developed and developing countries worldwide. In the developed world, there has already been a substantial amount of research on aging and work to help understand the capacity and potential of older people. There is an emerging view towards maintaining ability, developing potential, and continued competence (Ross, 2010). Older workers can compensate for a reduced ability to meet job demands by drawing upon experience and applying their resources in a more economic way. Thus, addressing the needs, wants, and well-being of older people is essential for maintaining a healthy productive workforce in an aging society. A recent series of review articles in Occupational Medicine (Ross, 2010) provides a valuable contribution to the knowledge of aging and work in the Western context. A special edition of Career Development International (Van de Heijden et al., 2008) presents some state-of-the-art contributions to research on the topic from a European perspective, as an alternative to the abundance of Anglo-Saxon studies in the literature. However, there is still a pressing need for cross-national and multidisciplinary approaches and for a higher involvement of scholars from other parts of the world outside the USA, UK and European continent in research on aging and work (Ross, 2010; Shultz and Adams, 2007; Van de Heijden et al., 2008). The present study examined a possible psychological process explaining the intention to continue working in a sample of Chinese older adults living in a developing economy (Taiwan, East Asia), thus answered the call by the above Western scholars.Though a similar trend of population aging is observed in Taiwan, the issue of employment of older workers has largely been overlooked. Adopting the United Nation's criterion, the proportion of those aged over 65 exceeded 7 percent in 1993 and further reached 9.7 percent in 2005. This trend will exacerbate when the post-civil war (1949) cohort enters older age in 2014. The official projected figure is 14.6 percent in 2018, and 20.6 percent in 2025 (Taiwan Census Bureau, 2006). A similar demographic change related to the aging of Boomer Generation and its resultant impact is observed in the Taiwanese setting, though a few years late as happened in the West, and set in a different historical context. The large-scale forced migration of soldiers and government employees (mainly young males) to the Taiwan island following the defeat of the Nationalist regime in the Chinese Civil War (1945-1949), has contributed to the rapid rise of the proportion of aging sector in Taiwan and its unique demographic profile (more older, single males). The Taiwanese setting is thus very different for an aging worker from that of the West: continue working may be an economic necessity rather than an individual career choice (Council of Labor Affairs, 1999). Lacking of adequate institutional welfare safety net further makes early retirement a luxury many Taiwanese older workers cannot afford (Lu, 2010a). Consequently, the knowledge on aging and work built on research conducted in the developed Western countries with well-established institutional welfare regimes needs to be reexamined in the Taiwanese setting.The realities of a rapidly aging society with a developing economy make the needs/desires of people to continue working in older age an increasingly important social issue. 
In fact, working in older age is not an exception in Taiwan: government figures showed that 31.6 percent of those aged 60-64 and 7.6 percent of those aged 65+ were working in 2006 (Taiwan Census Bureau, 2006). A recent nationwide study (Lu, 2010a) found even higher percentages of those working over 60 (41.5 percent for ages 60-64 and 26.5 percent for ages 65-69). However, the employment rate for Taiwanese workers aged 60-64 (33.49 percent) was still low compared with that of developed countries (e.g. the USA: 50.9 percent) and our East Asian neighbors (e.g. Japan: 54.7 percent; Korea: 53.6 percent) (Wu, 2006). To complement the emerging policy debate on encouraging the continued employment and hiring of older workers, in the hope of injecting more human resources to tackle the worsening labor supply shortage in Taiwan, we propose that individual-level psychosocial factors should be taken into account. After all, it is each individual who decides to stay in or quit the labor market when getting older. The purpose of this study, therefore, was to explore some of the potential psychosocial factors that may affect the decision to continue working in older age, specifically people's attitudes towards aging in general and their perceived social sanction to continue working.

Older age may be defined in many ways: for instance, 65 is the internationally adopted marker used by the United Nations, while 60-65 is the statutory retirement age in different countries. One nationwide survey in Taiwan revealed that Taiwanese people generally regarded 60, not the official criterion of 65, as the defining age of being "old" (Lee, 1999). To better represent this culture-specific psychological reality, in the present study we defined "older workers" as those over 60 years of age.

The employment plight of older adults in the West
Method: Samples and procedures
Results: Before testing the hypotheses, we computed Pearson correlations among all research variables. Table I reports the correlation results along with scale means and standard deviations. Attitudes towards aging, subjective norm, and personal experiences all correlated significantly with the intention to continue working in older age, and all relations were in the expected direction.

Among the control variables, sex, age, current employment status, and personal health correlated with the intention to continue working in older age. Other variables, such as education, rank, employment prospects, spouse health, and family members needing care, did not correlate with the dependent variable.

We proceeded to test the structural model using AMOS 5.0, applying the Maximum Likelihood technique. The initial model (all paths in Figure 1) was streamlined by deleting the non-significant paths shown in Table I. Specifically, paths from education, rank, spouse health, and family members needing care to the intention to continue working in older age (shown as dotted arrows in Figure 1) were not included in the model testing, so no path coefficients could be shown for them. We followed the suggestions of Bentler (1990) and Raykov et al. (1991) regarding criteria for evaluating SEM models: the fit indices (GFI and AGFI) should be in the upper 0.90s, and the residuals (RMSEA) need to be small (<0.08). Results for our modified model (χ²=42.38, df=16, GFI=0.96, AGFI=0.92, RMSEA=0.07) showed a good fit. All path coefficients shown in Figure 1 were statistically significant. Thus, our three hypotheses were all supported.

Regarding why they would continue working in older age, our participants gave as many reasons as they wished. At the top of the list was "keeping intellectually active" (19.8 percent), followed by "keeping social contacts" (19.2 percent), "physically still able" (19.2 percent), "filling time" (18.2 percent), "economic gains" (12.8 percent), and "contributing expertise" (10.7 percent). The average age to which participants said they would work in full-time jobs was 67.33 (SD=9.81), and in part-time jobs 72.40 (SD=9.93).
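As a rough illustration of the residual criterion applied above, here is a minimal Python sketch that computes RMSEA from a model's chi-square, degrees of freedom, and sample size using the standard Steiger-Lind formula, then checks it against the <0.08 cutoff. The sample size in the example call is a hypothetical value, since N is not reported in this excerpt.

```python
import math

def rmsea(chi_square: float, df: int, n: int) -> float:
    """Steiger-Lind RMSEA: sqrt(max(chi2 - df, 0) / (df * (n - 1)))."""
    if df <= 0 or n <= 1:
        raise ValueError("df must be positive and n must exceed 1")
    return math.sqrt(max(chi_square - df, 0.0) / (df * (n - 1)))

def acceptable_fit(chi_square: float, df: int, n: int,
                   cutoff: float = 0.08) -> bool:
    """Check the residual criterion used above: RMSEA below the cutoff."""
    return rmsea(chi_square, df, n) < cutoff

# Reported model: chi2 = 42.38, df = 16; n = 340 is hypothetical.
value = rmsea(42.38, 16, 340)
print(f"RMSEA = {value:.3f}, acceptable: {acceptable_fit(42.38, 16, 340)}")
# With n around 340, RMSEA comes out near the 0.07 reported in the paper.
```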
Discussion: From attitudes to intentions in aging and work
|
Apprehending mindsets in employee development
|
[
"Mindset",
"Performance",
"Motivation",
"Coaching"
] |
Summarize the following paper into structured abstract.
__NO_TITLE__: The psychological concept of mindset can inform HR practitioners and managers about important elements of employee personality with regard to goals, performance, motivation, and attitudes. While the concept of mindset has been studied extensively for the past 20 years, it has not received much attention in the management or HR literature, perhaps because few people clearly recognize the linkage between mindset behavior and employee functioning. What follows describes: what mindset is and how it normally functions; how mindset behavior shows up in day-to-day employee behavior; and how managers and HR practitioners can use mindset information, both in general and with individuals, in coaching and mentoring employees to improve functioning and performance.
Importance of mindsets in the organization: Why it matters
The bases of individual mindset: How the concept functions
Two common forms of mindset: Fixed and growth
Some interesting features of mindsets: Growth mindsets offer attractive payoffs
Springing mindsets to life: Expressions of mindsets and possible reactions
|