| title | keywords | messages |
|---|---|---|
Hospital healthcare service quality, patient satisfaction and loyalty: An investigation in context of private healthcare systems
|
[
"Pakistan",
"Loyalty",
"Customer satisfaction"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\n1. Introduction: In healthcare industry, the emergent of global competition inclines the patients more curious and concerned regarding the provision of healthcare services. The increasing focus on health and the development of the senior citizen population have vigorously improved the health needs and transferral of healthier lifestyle within the general population. The new enlargement in the global competitive service environment has influenced the residential business, especially the healthcare services. In this way, the healthcare associations have begun to emphasise on the superior healthcare service quality, as day by day increased competition among hospitals induces customers to make a best choice in selecting any hospital. Service quality is the most vital key element regarding service suppliers to get upper hand, so it ought to be very much enhanced and well measured (Lee and Yom, 2007). At the same time, better healthcare quality than counterparts provides businesses with a chance to distinct themselves in competitive markets (Karatepe et al., 2005). At present, on account of expanded expectations for ordinary eases and higher customer's desires, it is obligatory for hospitals to give superior healthcare services to the patients and to fulfil their requirements (Padma et al., 2010).\n2. Literature review: 2.1 Healthcare service quality and patient loyalty\n3. Theoretical framework: Various studies have accomplished to determine a connection between healthcare services and patient satisfaction (Alhashem et al., 2011), healthcare service quality and behavioural intentions (Naidu, 2009). At the same time healthcare service quality and patient loyalty is mediated by patient's satisfaction while patient loyalty is active as end result (Shabbir et al., 2016). 
According to the adopted research framework (Figure 1), the relationship between healthcare service quality, patient satisfaction and patient loyalty has been examined. Healthcare service quality aspects such as physical environment, customer-friendly environment, communication, privacy and safety, and responsiveness act as antecedents to patient loyalty. Patient satisfaction mediates the link between healthcare service quality and patient loyalty, while patient loyalty acts as the end result.\n4. Research methodology: 4.1 Population frame\n5. Research findings: 5.1 Reliability, common method variance, and descriptive results\n6. Discussion: The current study examines the research question of how healthcare service quality factors influence patient satisfaction, which in turn helps build loyalty intentions in hospitals of Pakistan. The study aimed to measure patients' views of healthcare service quality aspects such as physical environment, customer-friendly environment, communication, privacy and safety, and responsiveness, as well as the effect of these perceptions on patients' satisfaction and loyalty intentions. The study is descriptive in nature; its aim was to explore the research questions, and information was gathered from private hospitals to quantify patients' perceptions of whether they are satisfied and loyal or not.\n",
"role": "user"
},
{
"content": "Purpose: The purpose of this paper is to explain the patients' views towards private healthcare service providers. The study focussed on hospital service quality and analysed the relative significance of quality measurements in anticipating the patients' satisfaction and loyalty. The mediating role of patient satisfaction is assessed between quality of hospital healthcare services and patient loyalty.\nDesign/methodology/approach: A total 611 patients (both indoor and outdoor) participated in a questionnaire survey from the six private hospitals of capital city, Islamabad, Pakistan. Data were analysed through descriptive statistics, common method variance, reliability, correlation and regression in order to investigate customer perceived service quality and how the quality of services stimulates loyalty intentions towards private service suppliers.\nFindings: Findings depict that private healthcare service providers are attempting to deliver well improved healthcare services to their customers. Results confirmed that better quality of healthcare services inclines to build satisfaction and loyalty among patients. The healthcare service quality aspects (i.e. physical environment, customer-friendly environment, responsiveness, communication, privacy and safety) are positively related with patient loyalty which is mediated through patient satisfaction.\nPractical implications: Findings will help the hospital managers to articulate effective strategies in order to ensure superior quality of healthcare services to patients. The study will induce hospital management to deliver attentions towards the quality of private healthcare service systems and improvements towards the deficient healthcare services. 
Furthermore, the study will present a clear picture of patients' behavioural attitudes: satisfaction and loyalty intentions towards the quality of healthcare services.\nOriginality/value: The study provides the views and perceptions of patients towards the quality of healthcare services. The healthcare service quality dimensions, i.e. physical environment, customer-friendly environment, responsiveness, communication, and privacy and safety, were assessed. Hospital healthcare service quality was examined in order to find out its effect on patient satisfaction and patient loyalty.\n",
"role": "assistant"
}
] |
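The findings section of this record mentions a reliability check; Cronbach's alpha is the usual statistic for such questionnaire scales. A minimal, self-contained sketch of the computation (the scores below are invented for illustration, not the study's data):

```python
from statistics import variance

def cronbach_alpha(items):
    """Cronbach's alpha for rows of respondent scores over k questionnaire items."""
    k = len(items[0])
    columns = list(zip(*items))                  # one column of scores per item
    item_variance_sum = sum(variance(c) for c in columns)
    total_variance = variance([sum(row) for row in items])
    return k / (k - 1) * (1 - item_variance_sum / total_variance)

# Perfectly consistent items (every respondent scores all items identically)
# yield the maximum alpha of 1.0.
perfect = [[1, 1, 1], [2, 2, 2], [4, 4, 4], [3, 3, 3]]
print(round(cronbach_alpha(perfect), 3))  # → 1.0
```

Scales with alpha above roughly 0.7 are conventionally treated as reliable enough for the kind of correlation and regression analysis this record describes.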
Improving the turnover intention of tourist hotel employees: Transformational leadership, leader-member exchange, and psychological contract breach
|
[
"Turnover intention",
"Psychological contract breach",
"Transformational leadership behaviors",
"Hotel front-line employees",
"Leader-member exchange (LMX)"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\nIntroduction: In recent years, the Taiwan government employed a series of policy tools to foster development in its tourism industry. The main goal of the government was to attract more international travelers to visit Taiwan and add it to their list of travel destinations. By easing regulatory rules and adopting supportive policies, the Taiwan government helped the Taiwanese tourism industry to enter a new era. In fact, from 2008 to 2013, the number of foreign visitors in Taiwan increased from 3.85 million to 8.02 million, while tourism revenues doubled from NT$187.1bn (US$6.1bn) to NT$374.8bn (US$12.2bn). Although the significant number of visitors spurred much development and generated tourist revenues, Taiwan's hospitality industry is still facing a labor shortage. According to the Ministry of Labor in Taiwan, over the past five years, the average labor shortage rate has been 3.724 in the hospitality industry, which is higher than the average labor shortage rate of 2.788 for all industries in Taiwan. Hence, several studies have highlighted the critical challenge regarding the turnover intention in the hospitality industry (Kim et al., 2015; Mohsin et al., 2015; Robinson et al., 2014; Yang et al., 2012). Such considerably high rates and the high turnover intention of front-line employees also increase the cost of human resource management in the tourism industry (Kim et al., 2015), particularly with regard to recruiting new employees and re-training them with the necessary knowledge and skills (Cannon et al., 2015; Dhar, 2015). Koyuncu et al. (2013) noted that front-line workers contributed to the hotel organization's performance and success and thus, researchers of hospitality studies should understand that leadership may have a critical role in the tourism and hospitality industry. 
Despite the findings of these aforementioned studies, due to the low pay, long work hours, seasonal tasks and the routine nature of many of the jobs, high turnover rates of front-line workers still exist in the tourism and hospitality industry. Therefore, for top executives in the Taiwan tourism industry, the primary goal is to find adequate managerial strategies that can reduce the high turnover intention of front-line employees.\nTransformational leadership and leader-member exchange: The function of leaders is to engage followers and motivate them to achieve certain organizational goals (Burns, 1978; Lochan Dhar, 2016; Namasivayam et al., 2013; Qin et al., 2014; Tromp and Blomme, 2013). Transformational leadership has emerged as a major type of leadership behavior that can influence followers' attitudes, behaviors and outcomes, including organizational commitment, job satisfaction, intention to quit, performance and organizational citizenship behaviors (Bass, 1990; Gumusluoglu and Ilsev, 2009; Tsai et al., 2009; Judge and Piccolo, 2004; Nemanich and Keller, 2007; Podsakoff et al., 1990; Kirkpatrick and Locke, 1996). In addition, transformational leadership involves four major components that can motivate followers' attitudes and behaviors:\nLeader-member exchange and psychological contract breach: LMX theory argues that leaders can develop and form different social exchange relationships with different followers (Chen et al., 2007; Graen and Schiemann, 1978; Lee, 2005). High-quality LMX relationships are characterized by mutual trust, respect, commitment and obligation (Brower et al., 2000). Leaders in high-quality LMX relationships can successfully encourage followers to undertake more responsible behaviors than they otherwise would, as individuals in such relationships frequently interact with their leaders and receive assistance, encouragement and support. 
Hence, the followers take on more responsibilities and play a more important role in accomplishing work-related tasks (Dienesch and Liden, 1986; Erdogan and Enders, 2007). In contrast, low-quality LMX relationships are characterized by a formal set of rules, organizational role-defined employment contracts and economic exchange behaviors. Leaders in such relationships maintain a certain distance from their followers, which limits the amount of support and information exchange (Graen and Schiemann, 1978; Van Dyne et al., 2008). In other words, even when the organization does not fulfill its promises (at least temporarily), a high-quality LMX social relationship between the leaders and their followers makes individuals believe that the unfinished tasks may be fulfilled in the future. Accordingly, previous research has argued that LMX has an important influence on individuals' responses to a psychological contract breach (Collins, 2010). In this regard, high-quality LMX social relationships provide individuals with the support to cope with the psychological effects of perceiving that their employer (organization) has failed to fulfill the established agreements (Dulac et al., 2008). Hence, according to the argument above, the following hypothesis is posited:\nPsychological contract breaches and turnover intention: A large volume of studies has noted that an individual's perceptions of psychological contract breach can lead to negative work outcomes, including lower affective commitment (Coyle-Shapiro and Kessler, 2000), lower job satisfaction (Zhao et al., 2007), higher intention to leave (Cullinane et al., 2006), lower task performance (Robinson and Rousseau, 1994) and lower organizational citizenship behaviors (Robinson and Morrison, 2000). Bal et al. (2008) noted that psychological contract breach may influence work attitudes such as trust, job satisfaction and commitment. 
They also noted that as an organization breaks the unwritten contract established by the two sides (employee and employer), the employee fails to deliver his/her obligations to the organization, thus decreasing job satisfaction, trust and commitment to the organization. Such conditions also limit the employees' willingness to stay in an organization. In other words, psychological contract breach represents the organization's failure to reciprocate employees' contributions, which will have negative employee outcomes such as a decreased sense of satisfaction, trust and commitment to the organization. Dulac et al. (2008) argued that a psychological contract breach can also lead to unfavorable individual reactions, including higher intention to leave, a distrust of the organization and lower organizational commitment. Extending the research of Bal et al. (2008) and Dulac et al. (2008), this study further explores the causal relationships between psychological contract breach and turnover intention. For instance, individuals who perceive that their employer (organization) has failed to fulfill a promised agreement will be less satisfied with their jobs, leading to lower job satisfaction. In fact, individuals who experience a psychological contract breach tend to decrease their contributions to their organization, which leads to lower work performance and a higher willingness to leave the organization. Therefore, based on the aforementioned argument, the following hypothesis is posited:\nThe mediated effect of leader-member exchange on psychological contract breach: This study links H1 and H2 and further argues that transformational leadership behaviors can lead to a lower degree of psychological contract breach when they develop high-quality LMX social relationships. We note that transformational leadership behaviors affect individuals' perceptions of psychological contract breach through their effects on LMX. 
Therefore, we predict that LMX will serve as a mediator in the relationship between transformational leadership behaviors and psychological contract breach. On the basis of the aforementioned argument, the following hypothesis is posited:\nThe mediated effect of psychological contract breach on turnover intention: This study links H2 and H3 and further argues that a high degree of LMX may assuage the perception of psychological contract breach and reduce turnover intention. In other words, LMX affects individuals' turnover intention through their effects on psychological contract breach. Therefore, we predict that psychological contract breach will mediate the relationship between LMX and turnover intention. On the basis of the aforementioned argument, the following hypothesis is posited:\nThe mediated roles of leader-member exchange and psychological contract breach on turnover intention: As mentioned earlier, previous research has noted that transformational leadership behaviors can influence followers' outcomes by transforming and changing their attitudes and behaviors. However, the direct relationship between transformational leadership behaviors and turnover intention has been less discussed from a theoretical and practical perspective as well as from a psychological contract perspective. In this study, by integrating LMX theory and psychological contract theory, it was noted that transformational leadership behaviors will develop high-quality LMX social relationships, which, in turn, weakens individuals' psychological contract breach perceptions and leads to lower turnover intention. Therefore, this study noted that transformational leadership behaviors decrease individuals' turnover intention through their effects on LMX and psychological contract breach. In other words, it is predicted that LMX and psychological contract breach will mediate the relationship between transformational leadership behaviors and turnover intention. 
Based on the aforementioned argument, the following hypothesis is posited:\nMethod: Sample and procedure\nQuestionnaire design and measurement: Transformational leadership\nResults: Descriptive analysis\nHypotheses testing: This study adopted SEM models to examine our hypothesized relationships. Table III depicts the results of ten SEM analyses. As shown in Model 1, transformational leadership had a significant and positive influence on LMX (b = 0.84, p < 0.001), thus providing support for H1. As shown in Model 2, LMX had a significant and negative impact on psychological contract breach (b = -0.49, p < 0.001), thus providing support for H2. Meanwhile, as shown in Model 3, psychological contract breach had a significant and positive influence on turnover intention (b = 0.81, p < 0.001), thus providing support for H3. Models 4 and 5 were developed to examine the mediating effect of LMX between transformational leadership and psychological contract breach, following the analysis procedure from prior studies (Baron and Kenny, 1986; Lin and Shih, 2008). In Model 4, we found that transformational leadership was significantly related to psychological contract breach (-0.46, p < 0.001). However, in Model 5, after adding LMX as a mediator, we found that the direct effect of transformational leadership on psychological contract breach was reduced from -0.46 (p < 0.001) to -0.21 (p < 0.05), whereas the standardized indirect effect of transformational leadership on psychological contract breach was -0.26 (p < 0.001). This finding lends support to H4, which posits that LMX mediates the relationship between transformational leadership and psychological contract breach (partial mediation). Models 6 and 7 were designed to test the mediating influence of psychological contract breach between LMX and turnover intention. In Model 6, we found that LMX was significantly related to turnover intention (-0.55, p < 0.001). 
However, in Model 7, after adding psychological contract breach as a mediator, we found that the standardized direct effect of LMX on turnover intention was reduced from -0.55 (p < 0.001) to -0.22 (p < 0.05), whereas the standardized indirect effect of LMX on turnover intention was -0.35 (p < 0.001). The results showed that psychological contract breach mediates the relationship between LMX and turnover intention (partial mediation), thus supporting H5. Finally, Models 8, 9 and 10 were developed to examine the mediating effects of LMX and psychological contract breach between transformational leadership and turnover intention. In Model 8, transformational leadership was found to be significantly associated with turnover intention (-0.49, p < 0.001). However, in Model 9, after adding LMX and psychological contract breach as mediators, we found that the direct effect of transformational leadership on turnover intention was reduced from -0.49 (p < 0.001) to 0.17 (p < 0.05), whereas the standardized indirect effect of transformational leadership on turnover intention was -0.32 (p < 0.001). The results showed that LMX and psychological contract breach partially mediate the relationship between transformational leadership and turnover intention, thereby supporting H6. Meanwhile, in Model 10, we found that the direct effect of transformational leadership on turnover intention was reduced from -0.49 (p < 0.001) to 0.02 (p > 0.05), after adding additional paths from transformational leadership to psychological contract breach and from LMX to turnover intention. The results indicate that LMX and psychological contract breach fully mediate the relationship between transformational leadership and turnover intention. The standardized indirect effect of transformational leadership on turnover intention was -0.53 (p < 0.001), further supporting H6 (Figure 1, χ² = 301.46, χ²/df = 3.08, CFI = 0.94, TLI = 0.92 and RMR = 0.05). 
While further analyzing the change in the beta coefficient representing the effect of transformational leadership on turnover intention in Models 8 and 9 and Models 8 and 10, we found that approximately 65.3 and 85.7 per cent of this relationship can be explained by the LMX and psychological contract breach mediators, respectively. Although some residual direct effects exist, the relationship between transformational leadership and turnover intention seems to be explained by the mediating effects of LMX and psychological contract breach. Therefore, our results suggest that LMX and psychological contract breach mediate the relationship between transformational leadership behaviors and turnover intention.\nDiscussion: Conclusions\n",
"role": "user"
},
{
"content": "Purpose: This study aims to explore the high turnover intention issue in Taiwan's tourist hotel industry. Due to a lack of empirical research regarding front-line employees' psychological contract breach perceptions in tourism literature, this study develops an integrated model to examine the causal relationship among transformational leadership behaviors, leader-member exchange (LMX), psychological contract breach and turnover intentions.\nDesign/methodology/approach: Data from the 226 frontline employees in Taiwan's tourist hotel industry were employed to examine the proposed hypotheses by using a series of structural equation modeling analyses.\nFindings: Statistic results revealed that transformational leadership behaviors influence LMX and LMX in turn influences psychological contract breach, which consequently leads to lower turnover intention.\nPractical implication: The results of this study suggest that hospitality organizations should recruit individuals who have the potential to exhibit transformational leadership skills, along with designing leadership training programs for middle- and high-level managers.\nOriginality/value: This study provides hospitality organization leaders with the necessary information to formulate a beneficial relationship with their front-line employees, which, in turn, weakens their perception of psychological contract breaches and reduces their willingness to leave the organization.\n",
"role": "assistant"
}
] |
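The mediation tests in this record follow the Baron and Kenny logic: a total effect that shrinks once the mediator enters the model. That comparison can be sketched with two ordinary regressions on simulated data; the variable names and data-generating coefficients below are invented for illustration and are not the study's estimates:

```python
import numpy as np

# Invented data-generating process: transformational leadership (tfl) raises
# LMX, and both reduce perceived psychological contract breach.
rng = np.random.default_rng(42)
n = 500
tfl = rng.normal(size=n)
lmx = 0.8 * tfl + rng.normal(size=n)
breach = -0.3 * tfl - 0.5 * lmx + rng.normal(size=n)

def ols_slopes(y, *predictors):
    """OLS slope estimates for y on the given predictors (intercept included)."""
    X = np.column_stack([np.ones(len(y)), *predictors])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[1:]  # drop the intercept

total = ols_slopes(breach, tfl)[0]        # step 1: total effect of tfl
direct = ols_slopes(breach, tfl, lmx)[0]  # step 2: effect controlling for lmx

# Partial mediation: the direct effect shrinks once the mediator is
# controlled for, mirroring the Model 4 vs Model 5 comparison above.
print(abs(direct) < abs(total))  # → True
```

The same shrinking-coefficient comparison underlies the later Model 8 to Model 10 chain with two mediators; in SEM practice the direct and indirect effects are estimated jointly rather than by separate regressions as in this sketch.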
Demystifying the leadership kaleidoscope into an integrated view
|
[
"Leadership development",
"Leadership",
"Integration",
"DAC ontology",
"Leadership assessment",
"Renewed Darwinian theory"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\nIntroduction: The Poet P.B. Shelly (1822)[1] while referring to love said \"one word is too often profaned\". An analogous word or theme within the social science research would be \"leadership.\" A plethora of research is available on leadership (Dinh et al., 2014; Glynn and Raffaelli, 2010; Hernandez et al., 2011). Consequently, leadership research is becoming more and more diverse, multi-faceted and multi-focused than at any time in recent decades (Gardener et al., 2010). Even if one sets aside the minor ones, the scholarly literature is currently distributed across major 49 middle-range focal leadership theories (Meuser et al., 2016). Leadership has been studied by focusing upon the leader as a person, leadership as a process, leadership as a position or leadership as something delivering results (Grint, 2010). In spite of extensive research, scholars are yet to gain unanimity on the sources of leadership influence, multiple interactions across different loci and mechanisms of leadership (Eberly et al., 2013).\nLiterature review approach: At the outset, we would like to clarify that our paper is not an exhaustive literature review on leadership. A simple search using the keyword \"leadership\" in the Google Scholar database itself yields more than three million published articles, commentaries and books. A detailed review of all the available articles would be beyond the scope of any article. 
Instead, we have used the concept matrix approach (Webster and Watson, 2002) to conduct our literature review and identify the essentials for developing an integrated theory of leadership.\nIntegrated theory of leadership - the journey so far: Metaphorically speaking, leadership research is like a massive river, with tributaries flowing in from various disciplines, namely, political science (Burns, 1978), complexity science (Uhl-Bien et al., 2007), sociology (Weber, 1946), psychology (Stogdill, 1974) and spirituality (Fry, 2003). A team of 40 researchers from varied backgrounds, formed to create a general theory of leadership, was able to agree only on the points on which there were disagreements (Wren, 2006). These differences concerned paradigmatic assumptions, definitions of leadership, sources of leadership influence and leadership processes. Bass (1990) has alluded to the plethora of definitions of leadership over the years: definitions focused on group processes; on traits, dispositions and attributes; on influence or structure; on leadership leading to the achievement of goals; or on a combination of any of these attributes. Hunt (1991) recalls the story of a leadership researcher who, in spite of spending a considerable amount of time researching leadership, felt as if he had just begun his journey. Leadership theories have attracted critical comments over many salient aspects. \"Leadership is an ideology designed to support the existing social order by providing both a rationale for dysfunction and a direction in which shift to be made\" (Gemmill and Oakley, 1992, p. 344). Pfeffer (1977) referred to leadership as a phenomenological construct where the leader is frequently attributed as the cause of social events. Barker (2001) pointed out that definitions of leadership address the nature of the leader rather than the nature of leadership. 
This tendency to identify the leader as the main cause of collective performance has been called the \"leader attribution error\" (Hackman and Wageman, 2007).\nExploring the core of leadership phenomenon: One influential guideline for theorizing asks the theorist to have clarity on what theory is not (Sutton and Staw, 1995). Along similar lines, identifying what leadership is and what leadership is not is attempted in this paper for a better understanding of the leadership phenomenon. Three fundamental aspects of leadership - commonality across leadership outcomes, processes and the influence sources - are now dealt with separately in subsequent sub-sections.\nSynthesizing seven pivots for developing integrated theory of leadership: As students of leadership, it is important for us to distinguish between what we can and cannot say with authority on the subject - that is the essential first step in developing a grand unifying theory of leadership (Bennis, 2007, p. 2). Leadership research should be designed in such a way as to provide information relevant to the entire range of definitions, so that over time it will be possible to compare the utility of different conceptualizations and arrive at some consensus on the matter (Bass, 1990; Yukl, 2010). In the next segment, we synthesize our review and propose seven important pivots, namely, change-collective filter, leadership outcomes, organizational outcomes, leadership role, sources of leadership, leadership process and context (refer to Figure 1). A widely cited piece of advice on theory development suggests that a good theory ought to be clear on its boundary conditions (Bacharach, 1989). Further, a good theory needs to be comprehensive, precise and testable, parsimonious, and should provide applied and heuristic value (Cramer, 2013). We link our seven pivots and discuss how they satisfy these characteristics of a good theory. 
We believe that doing so strengthens our claim for the suitability of our approach to aid the development of an integrated theory of leadership.\nDiscussion: In this section, we discuss the research implications as well as the managerial implications of our developed model. As a research implication, we first discuss the usefulness of the synthesized definition and the seven pivots in helping to build an integrated theory of leadership. Using a hypothetical example for evaluating the quality of the leadership process, though limited, we demonstrate the practical applicability of our endeavor. Thereafter, we discuss developing and evaluating effective leadership development programs in organizations as a managerial implication of our paper.\nConclusion: In this paper, we did not seek to provide a comprehensive review of the leadership literature or a status report on extant leadership research. Our limited purpose was to present an overarching skeleton within which the various focal points of leadership research could be integrated. We used the concept matrix framework to organize and make sense of the reviewed literature. The integrated canvas of leadership research is built by attempting a macro analysis while at the same time trying to identify the commonality. Since we go simultaneously into both the width and the depth of the concept of leadership, this paper does not attempt to be a detailed literature review. However, as each aspect of our integrated canvas is delved into, the relevant leadership literature for each pivot may be explored separately. The current canvas could then be filled up and converted into an integrated leadership theory.\n",
"role": "user"
},
{
"content": "Purpose: The purpose of this paper is to argue for theoretical integration as a major step in consolidation of the vast leadership literature. It is an attempt to lay out a basic canvas that could be further used for building an integrated theory of leadership.\nDesign/methodology/approach: The concept matrix framework (Webster and Watson, 2002) has been used for analytical abstraction to organize the published research on leadership and distill certain commonalities. The paper finally proposes certain pivots for theoretical integration of the leadership literature.\nFindings: A thorough analysis of the leadership literature and the identification of commonalities within the various theories help the authors to identify the change and collective filter, outcomes of leadership, organizational outcomes, leadership role, sources of leadership, processes within the larger leadership process and the context as the pivots for building an integrated theory of leadership.\nResearch limitations/implications: While the paper is not a detailed literature review and lacks predictive power, it is a synthesis of the published literature. The paper proposes a descriptive model meant to provide a sound foundation for an integrated theory of leadership.\nPractical implications: The paper provides a framework to reduce the complexity and ambiguity of leadership research literature and could be utilized as a starting platform for an integrated theory of leadership. It is hoped that this shall also provide leaders and leadership training providers with a more holistic approach for leadership assessment and development.\nOriginality/value: Based on a survey of literature, the explanation of the leadership process, suggestions for evaluation of quality of leadership and pivots for theoretical integration are the main contributions.\n",
"role": "assistant"
}
] |
Call centers: is there an upside to the dissatisfied customer experience?
|
[
"Uncertainty",
"Loyalty",
"Control",
"Emotions",
"Customer",
"Call-center",
"Frustration"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\nIntroduction: One of the most commonly experienced modern-day service encounters is between the customer and a call center. Call centers have become the new way to deliver services and play a crucial role as a primary customer interface for many organizations. Currently, they are seen as the first, sometimes the only, way for companies to interact with and develop their customer base (Mukherjee and Malhotra, 2009; Russell, 2008).\nCall center dilemma: costs vs loyalty: The drive toward operational and financial efficiency means that customers of inbound mass market centers should get their problems solved quickly and cheaply. In contrast, there are fewer opportunities for company-client interactions, reducing the scope for creating and maintaining relationships. Research has shown that building relationships creates loyal customers. In turn, loyalty leads to higher sales and profit margins (Gopalani and Shick, 2011).\nPerformance metrics do not mean customer satisfaction and loyalty: Customer satisfaction and loyalty are key issues in any service encounter and the call center service is by no means an exception. Major emphasis is thus placed on delivering service quality to ensure satisfaction. Call centers tend to concentrate on achieving efficiency and measuring it through operational metrics such as average time to answer, queue time, percentage of calls answered within target time, percentage problem resolution on first call and so on (Aksin et al., 2007). These metrics are used to indicate service quality with the assumption that if call centers get this right, customer satisfaction and loyalty will naturally follow. The focus has been on industry-driven operational metrics for measures of performance rather than customer-centered measures of perceptions, emotions and behavior. 
This is all the more surprising because research has clearly established that emotions are one of the major components of customer satisfaction (Elfenbein, 2007). Most customer-centered measurements of satisfaction ask how customers \"feel\" about the service, how \"happy\" they are and so on. In the call center industry, however, when customer feedback is solicited about the quality of the call, this has tended to focus on cognitive evaluations, for example, the waiting time, whether the operator provided the required information accurately and speedily (Dean, 2002).\nCustomer emotions are key in disembodied services: A call center service not only is intangible (like all services as opposed to products) but also takes intangibility one step further by eliminating visual and physical cues. In the \"intangible\" service delivered in a restaurant, for example, the customer relies on the physical appearance of the staff and the place to help assess service quality. Research has already found that customers find the emotional content of an intangible service important (Krampf et al., 2003).\nThe causes of customer negative emotions from particular to general.: Commonly, we think it is the waiting, the tired and impatient employee, getting the problem solved but academics agree - emotions depend on individual perceptions. For example, listening to music can under normal circumstances be appraised as a highly pleasant experience but not so while waiting on the line in the context of a call center encounter. What is it about these experiences that frustrate and anger customers? What will they go on to do, and how can emotions be managed?\nFocus group method: To investigate these questions about how customers experience frustration and anger in a call center encounter, as well as their impact on customer satisfaction and loyalty, 121 individuals from varying national backgrounds took part in 33 focus groups of three to five participants each. 
During a 30-min session, the participants each described and discussed a negative call center experience. Moderators followed a discussion guide to ensure that each participant narrated their experience and that the following discussion covered the entire call center encounter.\nCustomers can feel negative emotions before the call: Even before making the call to the call center, most participants started off with negative emotions. Approximately, a third felt no particular emotion, and only four of them said they felt good or happy. The reasons for these initial states generally came from previous negative experiences with call centers or their general knowledge of how call centers operate. These experiences affected their expectations and caused them to anticipate problems such as wasting their time, not being in control of the situation and not achieving their goals. The following extract from participants reporting initial negative emotions illustrates these issues:\nNegative emotions come from goal obstruction and uncertainty: During the call, the initial mixed emotions of happiness, neutral feelings, frustration, anxiety and anger in the sample seemed to converge largely into frustration. First, because of temporary goal obstruction and reduced certainty. For example, some participants explained that they become more uncertain about whether they are ever going to get through to the call center or the right person. In parallel, their goal, i.e. solving their problem, is momentarily blocked or seems further and further away, as in the case of the caller who stated:\nNegative emotions come from reduced customer control and attributions of employee/company responsibility: Issues of control and responsibility also dominated the discussions of what happened during the call. 
Tables I and II elaborate on these themes and show how the customers described them.\nProviding alternative paths to problem solving leads to positive outcomes: Some participants responded to frustration by searching for alternative means to their goals, for example, by looking for the information online or going to the physical service outlet to get what they wanted. They also sought support from their friends and family, both for information and for material help such as a relative phoning up or providing assistance. In contrast to those who predominantly reported anger, the frustrated respondents showed persistence in their behavior by continued engagement with the call center employees, ringing back and following up.\nPerspective-taking leads to positive outcomes: Loyalty resulted from positive emotions for 12 participants who had frustrating experiences. They said that even though the experience had been bad, they believed that the company was not to blame or that one experience or one call center employee is not necessarily representative of the whole service:\nNegative emotions lead to switching and withdrawal: The main behaviors in response to frustration and anger were verbal aggression toward the employee, negative word of mouth, switching behavior and withdrawal from the process by simply putting the phone down without solving the problem.\nBeware of spurious loyalty: One very interesting finding was that a majority of the participants intended to remain with their service provider or to use the call center service again despite their negative emotions during the experience. The reason most frequently proffered was that they had no choice or they believed that the service elsewhere would not be any better. We termed this \"spurious\" loyalty after Dick and Basu (1994) as opposed to \"loyalty\" where the respondents still retained affective loyalty to the service provider or to the call center. 
Representative of spurious loyalty is the caller who stated:\nRobot-like, disembodied experiences breed frustration: Before the call, emotions among the participants were varied. They felt angry, anxious, neutral or frustrated. What happened during the call was that a large number of the callers, whatever their initial state, became frustrated. The analysis showed that this came not only from perceived goal obstruction, which is the everyday trigger to frustration, but also from increasing uncertainty and lowered caller control, with varied attributions of responsibility. \"Automatic responses\" seemed to exacerbate frustration suggesting that un-empathetic \"robot-like\" responses may be in the interests of consistency, quality and efficiency from the company point of view, but by increasing uncertainty and reducing perceived control, create considerable consumer frustration. In addition, the fact that the customer cannot see or physically interact with the service provider means that they cannot use their social skills such as adapting their expressions to the other, empathizing, persuading and other tactics that are used in daily interpersonal interactions. These two factors, the robot-like nature and disembodiment of the interaction lead to reduced certainty and control which are the seeds of frustration experiences.\nThe three Rs: restore customer control, reduce customer uncertainty, reach customer goals: Customer emotions can be managed to increase loyalty and avoid switching if companies understand what underpins them. Managers can tackle the root causes by:\nConclusion: Call centers are rightly concentrating on improving service efficiency by reducing waiting time, aiming to solve customer problems rapidly and using technology to lower costs. The push toward efficiency, however, can lead to a \"dehumanized\" service to which customers respond with anger and frustration, leading to reduced loyalty and negative word of mouth. 
In the long run, permanent negative attitudes towards these services will form. Call centers are the future - if they can efficiently manage customer emotions.\n",
"role": "user"
},
{
"content": "Purpose: This paper aims to present the triggers of negative customer emotions during a call center encounter and the impact of emotions on satisfaction and loyalty. It suggests ways of mitigating the negative effects of such emotions.\nDesign/methodology/approach: This paper uses exploratory research consisting of 33 focus groups with 121 narratives of a call center encounter.\nFindings: Callers predominantly report frustration as the emotion arising from negative experiences in a call center encounter. Goal urgency, reduced customer control and uncertainty underlie this emotional experience. Triggers include assessments of \"dehumanized\", \"incompetent\" or \"hostile\" call center employees as well as the more well-known multiple transfers and waiting time. Customer may remain loyal after a frustrating encounter if they believe that alternative services will be no better.\nResearch limitations/implications: Disembodied service encounters generate conditions of reduced control and certainty which foster negative emotions. The outcomes of negative emotions are not always negative if the call center context is managed appropriately. Focus groups took place in a European business school, so generalizability of the results to other regions may be limited.\nPractical implications: Negative emotions can have a strong effect on loyalty, a key issue in service organizations. This paper provides insights into how to manage customer emotions effectively.\nOriginality/value: Customer satisfaction and loyalty in terms of emotions are generally overlooked in the call center industry because of the focus on performance metrics. This study shows that emotions must be taken into account to ensure customer retention and the competitive edge.\n",
"role": "assistant"
}
] |
Gender and leadership aspiration: the impact of organizational identification
|
[
"Gender",
"Organizational identification",
"Communal orientation",
"Leadership aspiration"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\n__NO_TITLE__: Today's labor market remains characterized by a disparity between men and women occupying leadership positions. Although the percentage of women with a board position within the major listed EU companies has risen from 12 percent in 2010 to 23 percent in 2016 (European Commission, 2016), women clearly remain a minority. There are various factors linked to lower female leadership participation (cf. Carli and Eagly, 2016; Eagly and Carli, 2007; Heilman, 2001; Hoyt and Murphy, 2016; Rudman and Glick, 2001; Vial et al., 2016), however, here the focus will be exclusively on lower female leadership aspiration (cf. Gregor and O'Brien, 2015; Hoobler et al., 2014; Savery, 1990) as constituting an important yet not well understood factor. Aspiration is an important precursor of career attainment (Schoon and Polek, 2011), occupational status (Schoon et al., 2007) and hierarchical advancement (Tharenou, 2001) and therefore assessing factors stimulating female leadership aspiration is of importance.\nLiterature review and hypotheses: Communal orientation, organizational identification, and leadership aspiration\nMethod: Procedure\nResults: Means, standard deviations, and intercorrelations for all variables as well as reliabilities (Cronbach's a) for leadership aspiration, organizational identification and communal orientation can be found in Table I.\nDiscussion: Barriers to women achieving leadership positions are well-documented (Carli and Eagly, 2016; Eagly and Carli, 2007; Rudman and Glick, 2001) and such barriers may presumably also discourage women's leadership aspiration. The evidence generally also suggests that women indeed are more communal than men - a factor also argued to invite biases in others' leadership perceptions in favor of male leadership (Eagly and Karau, 2002; Heilman, 2001; Lyness and Heilman, 2006). 
Importantly, however, the present study shows that communal orientation in and of itself does not discourage leadership aspiration. Indeed, with higher organizational identification it may stimulate leadership aspiration, and thus increase the leadership aspiration of women more than of men due to women's greater communal orientation. The moderated mediation evidence that women show higher leader aspiration than men, mediated by communal orientation, when they highly identify with the organization, speaks to how the psychological linkage with the organization may inspire leadership aspiration. These findings have some clear implications for theory and practice.\nConclusion: Almost ten years ago Eagly and Carli (2007) introduced the labyrinth metaphor for women aspiring to get to the top. According to their findings, women do not face a barrier to leadership positions per se, but rather have to follow a circuitous, demanding and entwined path. As this metaphor is still valid today (Carli and Eagly, 2016; Kark and Eagly, 2010), the outlook toward a complex path to the top may discourage leadership aspiration among women. As such, understanding factors that may stimulate female leadership aspiration is crucial. The presented results depict an important step toward understanding how organizational identification and communal orientation interact and how they interact with women's leadership aspiration. Women express higher leadership aspiration, mediated by communal orientation, when they highly identify with the organization they work for. This speaks to the notion of how the psychological linkage with the organization may in fact inspire leadership aspiration. These findings do emphasize interesting avenues for future research in terms of studying, for instance, the interplay between a communal organizational identity and organizational identification as well as different foci of identification and their respective impact on leadership aspiration. 
Likewise, relevant implications for practitioners, such as the importance of senior role models in displaying organizational identification and the significance of developing and implementing an encompassing HR strategy how to build organizational identification, are presented within this paper. These gained theoretical and practical insights should hopefully be beneficial for creating an organizational environment that stimulates rather than diminishes leadership aspiration among women.\n",
"role": "user"
},
{
"content": "Purpose: Although nowadays more women occupy leadership roles, they still are a minority. Because aspiration is a precursor of advancement, examining conditions fostering female leadership aspiration is important. A neglected perspective is the impact of organizational identification. Identification can be argued to foster leadership aspiration because the essence of leadership is the pursuit of collective interests, and identification motivates such pursuits. The paper aims to discuss these issues.\nDesign/methodology/approach: A cross-sectional survey design with an n=400 fulltime employed men and women, working for various organizations was selected.\nFindings: The initial prediction was that identification is more important to women's leadership aspiration to the extent that gender is associated with communal orientation, because women tend to have stronger communal orientation with associated greater affiliation needs, and organizational identification can be expected to cater to those needs. The communal orientation by organizational identification interactive influence on leadership aspiration was supported. Also, the indirect effect of gender on leadership aspiration via this interactive influence of communal orientation and organizational identification was supported.\nResearch limitations/implications: Due to the selected survey approach the data are correlational and as a result no reference to matters of causality can be made. 
Thus (field) experimental data is needed to confirm these findings.\nPractical implications: Within the paper the discussion focuses on the importance of creating an environment that is more conducive to organizational identification and as such speaks to the communal orientation - being more pronounced among women - to act in favor of the organization by aspiring leadership positions.\nOriginality/value: The presented results depict an important step toward understanding how organizational identification and communal orientation interact and how they interact with women's leadership aspiration.\n",
"role": "assistant"
}
] |
Improving attitudes toward brands with environmental associations: an experimental approach
|
[
"Labelling",
"Brand equity",
"Environmental management"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\n__NO_TITLE__: An executive summary for managers and executive readers can be found at the end of this article.\nIntroduction: The late 1960s and early 1970s saw the start of a growing interest in the environment and in the possible ways of improving it. Thus, and in spite of differences among countries, over 75 per cent of the citizens in the major developed and developing nations claim to be very, or relatively, concerned about the environmental situation[1]. In line with this, and bearing in mind that marketing is a discipline that is mainly guided by the economic, technological, political and social context (Sheth and Sidosia, 1999), the field of marketing has produced numerous studies since the start of the 1970s. These have mostly concentrated on analysing the effect of environmental concern on consumer decisions, given the varying levels of success that the different business strategies based on the protection of our natural surroundings have had.In an attempt to shed some light on the causes of the limited acceptance of environmentally-advantageous products, we have analysed the way in which consumers use environmental information in their purchase decisions. More specifically, the main objective of this study is to analyse the role environmental attributes play in the forming of brand attitudes. To do this, we have established an experiment in which 352 women responsible for the household shopping are exposed to different levels of environmental information. We analyse the effect said information has on product attitude and purchase intention.\nLiterature review: Knowledge about the power of brands in influencing purchase and consumer decisions is an aspect that is of great interest to marketing researchers and professionals. This power is embodied in the concept of brand equity. 
According to Aaker (1992), brand equity is formed by, among other aspects, perceived quality and brand associations. These associations are, in short, all those links created by the firm's efforts to communicate with the public (Van Osselaer and Janiszewski, 2001). Keller (1993, 1998b) divides brand associations into attributes, benefits and attitudes. Thus, if we place these classifications on an ascending scale, environmental associations can be considered firstly as attributes, in that the consumer uses these beliefs in the product's environmental performance to characterise the brand. These attributes can be considered as product-related (Keller, 1993) (environmentally-friendly chemical composition) or non-product-related (beliefs in the possibility of packaging recycling or in the overpricing of environmentally-marketed products). Moving up the scale, these associations can act as consumer-expected benefits if the individual believes that the product can help the environment. The environmental benefits attributed to brands could operate on the three levels described by Park et al. (1986). Thus, these benefits can be related to the good performance of the product (functional benefit) as regards environmental care, referring to safety motivations (Maslow, 1970) and aimed at avoiding or eliminating the problem (Fennell, 1978; Rossiter and Percy, 1987). Likewise, they could be related to the feelings and emotions the consumer experiences when he uses the brand (experiential benefit), such that the consumer's satisfaction increases on believing that, by purchasing the brand, he is contributing to social welfare. 
Finally, brand-attributed environmental benefits could be related to the needs of social approval or external personal expression (symbolic benefit); in other words, those individuals who consider themselves to be ecologists may consider not purchasing a brand if they feel that it does not adequately reflect their ideology. At the top of the scale, environmental associations can determine attitudes, these being the consumers' overall evaluation of the brand (Wilkie, 1986). The attitudes sum up beliefs regarding the attributes and the functional, experiential and symbolic benefits (Keller, 1993), as well as contributing toward the perception of brand quality (Zeithaml, 1988). Our main hypothesis is that brand associations can refer to the environmental performance of the brand. This is consistent with Keller (1998a) when he points out that the social benefits of the product could be included among the brand associations. Thus, they may help to reinforce the brand image, improve its credibility and promote the feeling of accomplishment or realisation on purchasing the product (Hoeffler and Keller, 2002). If this is the case, environmental associations should improve brand attitude (Kinnear and Taylor, 1973), thus contributing to an increase in brand equity (Henion, 1972; Louchran and Kangis, 1994). In short, associating ecological attributes with the brand can contribute toward brand differentiation (Gallarotti, 1995), which will be viable as long as the environmental advantages are able to justify higher prices and can be communicated to the public (Azzone and Bertele, 1994). Products with clear environmental advantages, accompanied by effective communication channels, will better survive price-induced competition attacks (Kassarjian, 1971). The above reasoning allows us to establish a prior hypothesis, derived from theoretical contributions and empirical studies. 
The former generically postulate a series of benefits for the brand and/or company arising from developing corporate societal marketing activities (Louchran and Kangis, 1994; Keller, 1998a; Hoeffler and Keller, 2002). Secondly, previous studies have already shown the direct link existing between ecological attributes and brand attitude (Kinnear and Taylor, 1973), or relating ecological attributes to market share (Henion, 1972). Therefore, we firstly examine the presence of a direct positive relationship between belief in the environmental performance of the brand and brand attitude: H1. The stronger the beliefs regarding the brand's environmental performance, the stronger is brand attitude.\nMethodology: Experimental design\nResults: Effectiveness of the experimental situation\nConclusions and managerial implications: The results obtained from the simple hypothesis tests reveal a direct positive relationship between the beliefs regarding the ecological performance of the brand and attitude toward said brand. Likewise, relative evidence has been found to support the hypothesis of the lower importance of the ecological attribute compared to other attributes. These results are, to some extent, incongruous with those obtained after testing an environmental information-processing model, which has revealed the lack of statistical relevance of the effect of such beliefs on brand attitude. This conclusion contradicts, therefore, some other empirical contributions in this field (e.g. Henion, 1972; Kinnear and Taylor, 1973), although the methodology and objectives of these two studies are very different to those established for this research. 
On the other hand, it is in keeping with those studies that have demonstrated the secondary role of these associations compared to others (Stisser, 1994; Niva et al., 1998). Nonetheless, these results do allow us to recommend the use of environmental associations (we should not forget the positive correlation existing between beliefs regarding the ecological performance of the brand and attitude toward said brand), though they do suggest that these associations never overshadow those others that the consumer considers to be principal in each product category. On another note, we have found that environmental labels help to strengthen beliefs regarding the environmental performance of the brand, as well as increasing confidence in the judgments made on said performance. This main contribution is in line with other studies that have suggested that environmental labels should have a positive influence on the consumer's evaluations of the environmental performance of the brands included in the research (Parkinson, 1975; Scammon and Mayer, 1993, 1995; Roe et al., 2001), though, as far as we know, this relationship had not been experimentally proved. In this sense, we believe there is a need to homogenise the environmental labelling programs, which will help to reduce the risk perceived in the evaluation of said performance. There should also be a greater effort made by public administrations to inform the citizen of the existence of such labels, of their meaning and of the methodology used in granting them. For all of this, a useful way forward would be to intensify the presence of commissions, promoted by international bodies, in which the scientific community, ecology groups, certifying organisations, firms and the consumers themselves would all have their role.\nFuture research: The research work presented here should, in the future, be extended by analysing the convenience of implementing a single certifying system. 
This would be common to all product categories and, if useful, would contribute greatly toward communication with the general public. This possibility could also be replicated in other product categories, particularly durable goods, where involvement is normally higher. Another area could be that of services, in which it is more difficult to define a brand's environmental benefits.\n",
"role": "user"
},
{
"content": "Purpose: - The purpose of this article is to determine the relative importance of the ecological attribute when other attributes referring to the functional performance of a brand are taken into account, and check the effectiveness of environmental labels.\nDesign/methodology/approach: - The paper establishes an experiment in which 352 women responsible for the household shopping are exposed to different levels of environmental information. The study analyses the effect said information has on product attitude and purchase intention. In order to transmit the information, a leaflet specifically designed for the research was used.\nFindings: - This study confirms the presence of a positive effect of environmental associations on brand attitude, though this effect is smaller than that of other functional attributes. It also demonstrates that using independent environmental certifications strengthens beliefs in the product's ecological performance.\nResearch limitations/implications: - The use of washing powder can limit the feasibility of extrapolation of the results to other products. Therefore, a replication in other product categories is necessary/advisable.\nPractical implications: - In the light of the results, using environmental associations certified by independent bodies is recommended. This would help improve both brand attitude and brand equity.\nOriginality/value: - This paper increases the knowledge about the precise commercial usefulness of environmental associations in relation with other attributes.\n",
"role": "assistant"
}
] |
The uncertainties of risk management: A field study on risk management internal audit practices in a Finnish municipality
|
[
"Public sector",
"Risk management",
"Auditing",
"Internal control",
"Actor-network theory",
"Finland"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\n1. Introduction: In the wake of corporate scandals and failures that have occurred since the 1990s, frameworks and practices for managing organisational risks have proliferated in both the private and public sectors, while their scope has widened from the management of financial risks to the risk management of everything (Power, 2004). These significant developments have also inspired a wealth of research in various fields. For instance, audit scholars have illustrated how internal auditors have become to be portrayed as experts in risk management (e.g. Spira and Page, 2003; Power, 2007; Sarens et al., 2009) and how risk-oriented audit practices have been developed as a response to new demands (Curtis and Turley, 2007; Knechel, 2007; Robson et al., 2007). Management accounting researchers have lately focused especially on the broad notion of enterprise risk management (ERM), tracing the discourses and logics underlying its development (Power, 2004, 2007) or analysing various factors and processes that relate to the extent of its implementation (Liebenberg and Hoyt, 2003; Beasley et al., 2009; Woods, 2009; Arena et al., 2010).\n2. Researching the effects of risk management: 2.1 The extant literature\n3. Research methodology and methods: 3.1 Methods of data collection\n4. The uncertainties of municipal risk management: Following the ideas of Callon et al. (2009) concerning \"acting in an uncertain world\", we begin this section with a chronology of the key events to provide the reader with a concise view of how the study unfolded. This summary is followed by a prologue, which describes and characterises the initial processes of constructing risk management solutions up to 2002 when they first become controversial. Subsequently, we present our long-term exploration from 2002-2010. 
We examine two different situations of controversy where risk management solutions occupied a central position, thus analysing the uncertainties generated by risk management. We conclude the analysis with an epilogue focusing on the current status.\n5. Concluding discussion: In this article, we set out to study the effects produced by risk management in the long term, starting from its national inception in Finland in the mid-1980s and proceeding, in a local government context, via two significant episodes to the present day and state. We studied the process of how risk management solutions first emerged, how risk management was further calibrated and how risk management responsibilities were distributed throughout Case Municipality with the significant help of various technologies. We also showed that the instigator of risk management within Case Municipality was largely the Internal Auditing Unit (IAU), which took action when unexpected events occurred and risk management solutions came under pressure. Our paper contributes to three streams of literature: first, we add to the risk management literature by showing how uncertainties can be generated by risk management itself; secondly, we add to the knowledge on internal auditing by showing how internal auditors assume a central role in the development and assessment of risk management; thirdly, we develop Callon's work on framing by showing how different frames can relate to each other. In the following, we discuss each contribution in more detail.\n",
"role": "user"
},
{
"content": "Purpose: The purpose of this paper is to analyse the implementation of risk management as a tool for internal audit activities, focusing on unexpected effects or uncertainties generated during its application.\nDesign/methodology/approach: Public and confidential documents as well as semi-structured interviews are analysed through the lens of actor-network theory to identify the effects of risk management devices in a Finnish municipality.\nFindings: The authors found that risk management, rather than reducing uncertainty, itself created unexpected uncertainties that would otherwise not have emerged. These include uncertainties relating to legal aspects of risk management solutions, in particular the issue concerning which types of document are considered legally valid; uncertainties relating to the definition and operationalisation of risk management; and uncertainties relating to the resources available for expanding risk management. More generally, such uncertainties relate to the professional identities and responsibilities of operational managers as defined by the framing devices.\nOriginality/value: The paper offers three contributions to the extant literature: first, it shows how risk management itself produces uncertainties. Secondly, it shows how internal auditors can assume a central role in the risk management system. Thirdly, it develops Callon's framing/overflowing framework with the notion that multiple frames are linked and create unexpected dynamics, and applies it to the study on the effects of risk management tools in an internal audit context. It shows how, despite recurring attempts to refine risk management, further uncertainties are continuously produced, thus providing an empirical illustration of how reframing and overflowing intertwine in a continual process.\n",
"role": "assistant"
}
] |
Marketing curriculum preparedness for Thailand: Comparing research for marketing curriculums in colleges and universities for higher education institutions in Thailand
|
[
"International marketing",
"Sustainability",
"Education",
"Marketing in developing countries",
"Business education",
"Marketing competitiveness"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\nIntroduction: Higher education in Thailand is a growing concern for many investors and global economic developers. As global markets increase, the concern is to ensure that future generations of adult learners are adequately prepared for a combined community of economies. True to the concept of a collectivist society, the ASEAN Economic Community (AEC) represents one of the most lucrative opportunities for future global economic progress. This vital role is hampered by key deficiencies in the available constructive policies and procedures that can help to prepare future generations for global economies. More specifically, there is little or no research done to correlate specific areas of business development with the curriculum for young adult learners throughout the AEC in Thailand's adult learning institutions that target key tasks and functions of the corporate entity for the future, such as with marketing education. While research does exist that notates the need for attention and development of marketing programs and systems in Thailand correlating with the AEC, little is said as to what critical focuses exist for the cultivation of marketing curriculum for future business leaders. It is this realization that drives the intent of this report in order to examine the available research on factors and models identified in other parts of the world, specifically from other countries within the AEC, as well as the factors that concern AEC development.\nReview: From the perspective of applicability, the understanding of what the AEC represents to people who will live and work in this new economic powerhouse as a source of identifying how those people will incorporate it into their lives is what is of concern for the Southeast Asia-Pacific region. 
From the individual cultural standard, and from the view of the individuals living in those countries, the AEC is unidentifiable (Lozada, 2013; Robertson, 2006). For decades the people in the Asia-Pacific region have been pushing for the populace to become better prepared for the AEC, yet in the year that the economic union is set to begin most people still consider themselves as a separate country with their own individual unique cultural identities (Lozada, 2013). Little attention is paid to the collective identity of the AEC, nor of such a notion's extreme importance to its long-term success. Thais still consider themselves Thai despite their widespread programs of awareness for the union's start-up (Igel and Numprasertchai, 2005; Lozada, 2013). Vietnamese still regulate their lives according to the tenets of communist doctrine regardless of its inapplicability to the stipulations of the AEC Charter (Igel and Numprasertchai, 2005; Ng, 2001). Filipinos still consider themselves closer aligned to Americans than they do to any other Asian culture (Lozada, 2013). Is it really any wonder that there is little or no preparation of educational curriculum realistically prepared for such a union? In fact, this very conundrum is the key factor in why there is virtually no existing research identifying the specific needs of developing a collegiate-level marketing curriculum that meets the demands of the forthcoming AEC in both practicality scope and Charter outline.\nResearch methodology: The pertinent research reviewed was obtained via online journal resources, as well as journal sources from the researcher's university of employment's institutional library that provided access to regional and international reports and academic articles. All reports and articles reviewed were from recognized journals pertaining to collegiate curriculum development, critique, and/or qualification. 
The research methodology is a quantitative focus on three different factors, but these are not mutually exclusive since some research and reports resulted in multiple responses. Those aforementioned focuses included the following:\nFindings: Factor 1 (see Figure A1) findings analysis shows the highest focus for research development recognized on a national level for Thailand is a tie between Avi: the physical condition and resources of the learning environment and Avii: the support of the research and academics over social activities as an imperative. Both stand at 23.5 percent of the sampled reports and articles, respectively. The second strongest focus was for Ai: teamwork learning skills in adult education at 19.6 percent. The need for Av: services for helping to develop and enhance community infrastructure and profitability showed only 1.96 percent of the sample. The future of marketing curriculum development focuses on scholarly excellence, not on the impact to the social environment.\nDiscussion: Obviously, there is recognition throughout Thailand that scholastic pursuits are of extreme importance for Thailand's future considering the poor performance in the World Economic Forum (WEF) annual testing ranking near the bottom of the original \"ASEAN - 8\" (Chinnawongs et al., 2006; Ng, 2001; Pimpa, 2009; Richmond, 2007; Sangnapaboworn, 2003), and even ranking below Vietnam and Cambodia (Schwab, 2012). 
Thailand's performance in the next generation's academic skills is also low in the Program for International Student Assessment (PISA) study, where out of 65 countries Thailand came in 50th place in mathematics, 48th place in science, and 47th place in reading (OECD, 2014).\nConclusion: Once there is a better understanding of the differences between community development productivity and community service, the educational institution's focus on academic excellence will generate advertising and public relations material that promotes the institution's own corporate social responsibility, and thereby increase the marketing development within that very institution. The focus should be on programs that promote the use of technology for recycling in the community, promoting ethical labor practices for local employers, and employee engagement programs that ensure each member of the company has an adequate home life that increases overall productivity. This will have tremendous rebounds in the development of the institution's marketing program that can be instilled in the learning curriculum, where the institution itself becomes not only a perfect example of the necessary physical condition and resources of a learning environment that promotes marketing principles, but also of ongoing research and academics that will then incorporate those social activities promoting more CSR without conflicting with scholarly development imperatives. This is, essentially, exactly how the Thai adult learning institution can focus its efforts on increasing academic excellence and rise in the standings of the WEF and PISA results.\n",
"role": "user"
},
{
"content": "Purpose: The purpose of this paper is to identify the key areas of marketing research inefficiencies for marketing education development in Thai colleges and universities so that possibilities of future research development can be encouraged and enhanced. It is a direct reflection of the drivers of marketing in the ASEAN Economic Community (AEC) cross-correlated with issues that affect the development of adult learner curriculums with regard to the level of preparedness of future marketing leaders.\nDesign/methodology/approach: This is a quantitative focus on three different factors, but the factors are not mutually exclusive since some research and reports resulted in multiple responses. Frequency distribution is used with both single and multiple answers.\nFindings: The data reveals a focus on scholarly excellence from a nationalistic Thai cultural perspective that is devoid of impact to the future Thai social environment.\nResearch limitations/implications: The limitations are linear, showing a unified vision across Thailand's educational development with little variation in concept and outcome, using only Thai sensibilities as the guidepost for progression.\nPractical implications: The benefit garnered from this type of study will be useful for corporate entities looking for trained and prepared marketing leaders of the future in the country of Thailand and throughout the AEC.\nOriginality/value: The scope is unique, with very little comparable research previously conducted, so that a more effective model can be seen for Thailand's future marketing curriculum development.\n",
"role": "assistant"
}
] |
Motivation and knowledge sharing: a meta-analysis of main and moderating effects
|
[
"Motivation",
"Meta-analysis",
"Knowledge sharing",
"Intrinsic and extrinsic motivators",
"Individual-level",
"Organisational-level"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\n1. Introduction: In the knowledge-based view of a firm, knowledge has been identified as the most strategically important resource and a principal source of value creation (Teece and Al-Aali, 2011). Valuable knowledge resides in people, and people can share knowledge to transfer it to different individuals and groups, and from one generation to another. A large body of literature has investigated the antecedents of knowledge sharing behavior to encourage individuals to participate in the knowledge sharing process (Kumari and Takahashi, 2014; Nguyen et al., 2019; Witherspoon et al., 2013). Prior research concluded that an individual's motivation plays a key role in enabling knowledge sharing (Kankanhalli et al., 2005; Osterloh and Frey, 2000; Wasko and Faraj, 2005).\n2. Theory and hypotheses: Motivation has been considered a key determinant of knowledge sharing (Osterloh and Frey, 2000). In other words, motivation is the force that shapes the desires of individuals to share knowledge (Tang et al., 2016). Knowledge sharing motivation is usually divided into two categories, extrinsic and intrinsic (Abuhamdeh and Csikszentmihalyi, 2009; Amabile, 1993; Gong et al., 2017; Malka and Chatman, 2003). Extrinsic motivation comes from some expected consequences or a goal-driven reason when performing an activity (Osterloh and Frey, 2000). Intrinsic motivation in knowledge sharing, on the other hand, implies that individuals find the activity in and of itself interesting, enjoyable, effective and stimulating (Foss et al., 2009).\n3. Methodology: 3.1 Methodological approach\n4. Results: 4.1 Descriptive statistics\n5. Discussion: Our meta-analytical results from 44 studies involving 14,023 participants confirm previous studies (Chang and Chuang, 2011; Lin, 2007; Wasko and Faraj, 2005) showing that both intrinsic and extrinsic motivators are positively related with knowledge sharing behavior. 
The effect sizes are of substantial size ranging from r = 0.245 to 0.437. This finding is not surprising for intrinsic motivation, as knowledge sharing is often considered as a voluntary behavior, such that an individual does not normally share valuable knowledge without any intrinsic motivation (Tang et al., 2016) and the active participation and cooperation of the knower (Nonaka and Takeuchi, 1995). The result of extrinsic motivation is noteworthy because extrinsic motivation has shown an inconsistent effect on knowledge sharing: positive (Kankanhalli et al., 2005), insignificant (Lin, 2007) and even negative (Bock et al., 2005). This study shows that although the impact of extrinsic motivation on knowledge sharing was mixed, the overall impact across the studies is positive.\n6. Conclusion: This study provides a more complete understanding of the role of motivation in increasing knowledge sharing. The findings show that both intrinsic and extrinsic motivation significantly influence knowledge sharing behavior and that intrinsic motivation has a stronger effect and is amplified by extrinsic motivation. Our theoretical reasoning and empirical results revealed that the effect of motivation on knowledge sharing is moderated by individual demographics, organizational contexts and cultural contexts.\n",
"role": "user"
},
{
"content": "Purpose: The purpose of this study is to investigate the effects of intrinsic and extrinsic motivation on knowledge sharing and the moderating effects of individual demographics, organizational context and cultural context in that relationship.\nDesign/methodology/approach: This study conducted a meta-analysis of 44 studies involving 14,023 participants to examine the direct and moderating effects of motivation on knowledge sharing.\nFindings: Results revealed that both extrinsic and intrinsic motivational factors were associated with higher levels of knowledge sharing, while the effect was stronger for intrinsic motivation. Moreover, results revealed that substantial variance was explained by moderating variables. Further investigation revealed that individual characteristics (age, gender), organizational context (organizational setting vs. open system, IT infrastructure) and cultural context (collectivism, uncertainty avoidance, performance orientation, power distance) moderated the motivation and knowledge sharing relationship.\nResearch limitations/implications: As a meta-analysis, this study is confined to variables that have been frequently analyzed in prior research. Future research could further increase our understanding of different types of knowledge sharing and various boundary conditions.\nPractical implications: Organizations should provide customized incentive systems to specific target groups to align motivation and knowledge sharing. Multinational organizations may consider different motivation schemes across countries to better suit cultural differences.\nOriginality/value: Despite a growing number of studies highlighting the important role of motivation in predicting knowledge sharing, the evidence is mixed. Based on a meta-analysis, this study identified true relationships and identified moderating effects that help explain prior mixed results.\n",
"role": "assistant"
}
] |
How selection attributes affect golfers' satisfaction: a meta-analysis
|
[
"Golf",
"Meta-analysis",
"Systematic review",
"Consumer satisfaction",
"Selection attributes"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\nIntroduction: Per a report by the Korea Golf Course Business Association (KGCGA, 2017), golf tourism in South Korea is estimated to be a USD 9.2bn industry, with a ripple effect of around USD 16.2bn when considering its impact on affiliated industries. With increasingly more Korean players winning on the global stage, golf is enjoying continued popularity as an attractive participatory sport with lucrative sponsorship opportunities for commercial partners (Jang, 2015; Jung and Jang, 2014; KGCGA, 2017; Lee, Kim, Ko and Sagas, 2011; Park, 2016).\nMethod: Meta-analyses allow researchers to systematically review and analyze critical findings from published research on a topic of interest (Borenstein et al., 2009; Kim and Cruz, 2016), thereby providing quantitative summaries of the literature. Behavioral scientists widely rely on this method (Hunter and Schmidt, 2004). Before a meta-analysis, researchers must search and preliminarily verify the relevance of published research of the highest scientific standards (Higgins et al., 2003; Hunter and Schmidt, 2000).\nResults: This meta-analysis included 100 single-coded ESs from 19 empirical studies and 32,039 samples (e.g. golfers, visitors). The Q statistic and I2 index were calculated to evaluate heterogeneity in the overall ES of the relationship between selection attributes and satisfaction with golfing facilities. Assessing heterogeneity is a crucial step in the meta-analysis process, and it helps determine the variability of the data for further meta-analyses (Higgins et al., 2003; Higgins and Thompson, 2002; Hunter and Schmidt, 2000; Kim and Cruz, 2016; Smith et al., 1995). Specifically, the I2 index is the preferred measure to quantitatively verify the dispersion of ESs in meta-analyses when studies utilize various measures of heterogeneity assessment. 
I2 index values of 25, 50 and 75 percent are interpreted as indicating small, moderate and large levels of heterogeneity (Higgins et al., 2003). Both metrics indicated the existence of considerable heterogeneity throughout the ES indices, with comparatively high heterogeneity results (all I2 indices exceed 75 percent as described in Table II) as suggested by Higgins and Thompson's (2002) criteria.\nDiscussion and conclusion: Both outdoor golf facilities and screen golf facilities are facing a rapid change in market conditions due to external factors in the golf industry in South Korea (e.g. economic situation, the ever-changing demands of golfers, the popularity of golf itself and so on). Unlike outdoor golf course facilities, the demand for screen golf facilities seems to be steadily growing because of their unique functions and advantages. In the modern sport industry, sports relying on advanced technology (i.e. augmented or virtual reality) are continually being introduced. Such technology is steadily changing consumers' involvement in sports and might eventually become the primary medium for sports participation.\n",
"role": "user"
},
{
"content": "Purpose: The purpose of this paper is to summarize the results of existing studies that addressed the relationship between the selection attributes of golf businesses and golfers' satisfaction in South Korea.\nDesign/methodology/approach: Studies on the relationship between selection attributes and satisfaction per golf facility type were retrieved from the Korean Citation Index. Using Comprehensive Meta-Analysis version 2, the effect sizes (ESs) for the following relationships were calculated: between selection attributes and satisfaction with golf facilities in general; between selection attributes and satisfaction per distinct golf facility types (outdoor golf courses vs screen golf facilities); and between the sub-factors of selection attributes and satisfaction per golf facility.\nFindings: Medium ESs were found between selection attributes and satisfaction with golf facilities in general (0.394), outdoor golf courses (0.336) and screen golf facilities (0.370). The choice attribute factors of accessibility, employees and concession had large effects on golfers' satisfaction with outdoor golf courses, while services and concession had large effects on golfers' satisfaction with screen golf facilities.\nOriginality/value: Among many Asian countries, golf is extremely popular and attracts more business than other sports; however, changes in consumer viewpoints and business trends necessitate consistent refinement. This meta-analytic approach offered a macro perspective on how golf facility administrators might sustain and develop their businesses by highlighting what choice attributes matter most to golf consumers.\n",
"role": "assistant"
}
] |
Gatwick employees get passport to a slimmer and fitter life-style: Health and well-being take off at airport
|
[
"Employee well-being",
"Health",
"Airports"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\n__NO_TITLE__: Security employees at London's Gatwick airport became slimmer and fitter following a 12-week health program.\nBetter sleep quality: One participant diagnosed with type 2 diabetes successfully lowered his fasting glucose level to within the normal range. Employees also reported feeling happier, had better sleep quality, more confidence in what they could achieve and more satisfaction with exercise and leisure activities.\n\"Grab and fly\" bags: Airport staff can now buy \"grab and fly\" bags containing a pot of dried oats and milk powder, fresh fruit and mixed dried fruits and nuts. \"Cabin crew can easily add hot water to the porridge in flight and can nibble on the healthy snacks later, instead of reaching for the chocolate\", enthused Amanda Ursell.\nOpportunity for employers everywhere: Dr Carrie Ruxton, who collaborated over the passport to health program design, analysed data and co-authored the project report, explained: \"We spend most of our waking hours at work so workplace interventions really do have the capacity to make a big difference to the health and well-being of employees. The program is a ringing endorsement of how an employer can positively influence employees' approach to health and promote their personal well-being. More employers should follow this lead.\"\n",
"role": "user"
},
{
"content": "Purpose: Describes two health and well-being programs that are helping employees at London's Gatwick airport to get fitter and enjoy a healthier lifestyle.\nDesign/methodology/approach: Explores the reasons for the programs, the form they take, and the results they have achieved.\nFindings: Explains how security employees became slimmer and fitter following a 12-week health program, while the ongoing passport to health program is promoting healthier eating and lifestyles among employees.\nPractical implications: Advances the view that improved health and well-being in employees reduces absenteeism and improves productivity, mood and self-esteem in the workforce.\nSocial implications: Reports that Gatwick employees feel happier, have better sleep quality, more confidence in what they can achieve and more satisfaction with exercise and leisure activities.\nOriginality/value: Highlights how other employees could benefit from similar schemes.\n",
"role": "assistant"
}
] |
Climate adaptation planning in the higher education sector
|
[
"Universities",
"Adaptation",
"Resilience",
"Survey"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\nIntroduction: It is now widely recognised that urgent action is needed to both limit climate change (mitigation) and to prepare for changes to our climate that are unavoidable or already underway (adaptation). To date, the majority of policy attention has been focused on mitigation, predominantly through action to reduce greenhouse gas (GHG) emissions. However, climate adaptation has become increasingly prevalent in research, policy and practice (Fussel, 2007; Laukkonen et al., 2009; Dyer and Andrews, 2011; IPCC, 2014). Given the importance of local context for adaptation planning - a product of differing exposure and sensitivity to climate-related hazards, variations in projected climate futures, as well as levels of adaptive capacity - responses are commonly championed and implemented at the local level (McEvoy and Funfgeld, 2013; Commonwealth of Australia, 2015).\nResults: Climate change mitigation\nLessons learned: Respondents had the opportunity to comment on any lessons learned in their institutional planning or assessment process. Items that were considered as drivers or facilitating factors for formally responding to climate change challenges were: commitment to Green Star buildings (voluntary sustainability rating system of the GBCA for buildings in Australia); recognition of the value of sustainability and assigning it a representative committee within the Senior Management Team; engagement with a wide range of groups who have specific insight into the operation of the campus and engagement with the community; utilisation of expertise of government scientists and development and review of the institution's EMS. In addition, leadership (both academically and operationally) was considered vitally important, as well as the showcasing of best practice. For one university, the increasing unreliability of coal supply required by university boilers was considered a driving factor. 
One respondent also commented that measuring exposure to risks will help institutions prepare for future [climate-related] risks.\nDiscussion: The survey findings supported the two key hypotheses that were proposed at the outset of this research: first, that action to mitigate climate change is more prevalent in the tertiary sector than planning for adapting to future change; and second, that overall climate change adaptation planning is formative at best and lacking in stakeholder awareness and motivation. Although a quarter of the 45 institutions have established stand-alone climate change mitigation plans, only five of them had (or were developing) similar plans for adaptation. Although the results also indicate that almost a third of the institutions participating in this survey can be considered to have taken specific steps towards analysing or adapting to climate change impacts, it is noted that this is by no means representative of the sector at large, with recruitment for the survey being targeted at those institutions already engaged in sustainability and climate planning. The survey also indicates that very few tertiary sector institutions are planning for climate change impacts in a structured manner (i.e. by following a standardised approach or methodology).\nConclusions: This paper aimed to increase the understanding of how higher education sector contributes to local-level climate adaptation planning processes. It did so by examining and benchmarking the sector's organisational planning for climate change, focusing especially on the assessment of climate change impacts and adaptation plans. The study indicates that there are still very few tertiary institutions that plan for climate-related risks in a systematic way. It also sheds light on the barriers HEIs face that prevent them from engaging more fully in climate adaptation planning and action. To overcome these barriers, it is recommended that HEIs, as well as their peak representative bodies:\n",
"role": "user"
},
{
"content": "Purpose: There is a growing interest in climate change action in the higher education sector. Higher education institutions (HEIs) play an important role as property owners, employers, education and research hubs as well as leaders of societal transformations. The purpose of this paper was therefore to benchmark how universities globally are addressing climate risks.\nDesign/methodology/approach: An international survey was conducted to benchmark the sector's organisational planning for climate change and to better understand how the higher education sector contributes to local-level climate adaptation planning processes. The international survey focused especially on the assessment of climate change impacts and adaptation plans.\nFindings: Based on the responses of 45 HEIs located in six different countries on three continents, the study found that there are still very few tertiary institutions that plan for climate-related risks in a systematic way.\nOriginality/value: The paper sheds light on the barriers HEIs face in engaging in climate adaptation planning and action. Some of the actions to overcome such hindering factors include integrating climate adaptation in existing risk management and sustainability planning processes, using the internal academic expertise and curriculum to assist the mapping of climate change impacts and collaborating with external actors to guarantee the necessary resources. The higher education sector can act as a leader in building institutional resilience at the local scale.\n",
"role": "assistant"
}
] |
A verbal rating scale to measure Japanese consumers' perceptions of product quality
|
[
"Ethnocentrism",
"Japan",
"Consumer behaviour",
"Behaviourally‐anchored rating scales",
"Individual perception"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\n1. Introduction: Measuring people's opinions or attitudes on questionnaires using measurement scales is a common practice in all fields of management. To facilitate survey respondents' evaluation task, researchers sometimes use rating scales with verbal descriptions of the scale anchors, referred to as Thurstone or verbal rating scales. For example, stakeholders might evaluate the trustworthiness of a firm's press announcements according to gradually differing expressions such as \"sure and certain,\" \"quite true,\" \"almost reliable,\" \"rather implausible,\" and \"false.\" Verbal rating scales offer several advantages for attitude measurement, in that they are convenient to use and help reduce questionnaire length. In some cases, they may even improve measurement validity, according to the results of recent psychometric studies. However, the advantages of Thurstone scales frequently get overlooked, and the scale format rarely is employed in current literature. Therefore, this research attempts to shed light on Thurstone scales as a viable alternative to attitude measures. To this end, we discuss the advantages and theoretical differences of a Thurstone, as opposed to the dominant Likert, approach to attitude scaling. Furthermore, we illustrate our Thurstone scale development procedure, which results in a new verbal rating scale for measuring Japanese consumers' perceptions of product quality. The new scale achieves appropriate psychometric qualities for interval-level measurement; we also demonstrate the reliability and nomological validity of the scale. Customer-perceived quality represents a key determinant of sustainable business success. A firm's product or service quality, relative to that of its competitors, is \"the most important single factor affecting a business unit's performance\" (Buzzell and Gale, 1987, p. 7). 
Higher quality perceptions lead, in the short run, to increased profits due to premium prices and, in the long run, to effective business growth, involving both market expansion and market share gains. More specifically, quality and reliability have long been considered integral to Japan's industry and production processes, as illustrated by the slogans for Sharp Electronics, \"Quality First in Heart and Mind,\" and Fujitsu, \"Quality built-in, with cost and performance as prime considerations.\" The focus of the Japanese Kaizen philosophy is to improve quality constantly and recognize the possibility of changing all functions of a business, from manufacturing to management, from the CEO to the assembly line workers (Masaaki, 1986). Kaoru Ishikawa, one of the originators of total quality control management in Japan, outlined as early as 1968 that \"quality comes first, not short-term profits\" (Pecht and Boulton, 1995, p. 116). Customer-driven quality remains one of the seven core criteria required by the Japan Quality Award (see www.jqac.com).In this context, we review the Likert approach to attitude measurement, then discuss its advantages, as well as those of the Thurstone approach. To illustrate the Thurstone scale development procedure, we conduct a series of three studies that feature item generation, scale development, and scale validation. Finally, we conclude with some theoretical and managerial implications.\n2. Likert and Thurstone approaches to attitude measurement: Researchers can choose from a myriad of scale formats, such as paired comparisons, graphical, semantic differentials, or Stapel scales to measure consumers' attitudes. An early distinction contrasts Thurstone's (1928) and Likert's (1932) approaches to attitude scaling. 
In Thurstone's terminology, an item refers to a verbal qualifier or the verbal description of a scale anchor, such as \"excellent\" or \"mediocre.\" Thurstone's (1927) law of comparative judgment entails a general theoretical model that uses empirical data about item comparisons to develop measurement scales. Thurstonian approaches to scale development include equal appearing intervals, successive intervals, and paired comparisons (Thurstone, 1954). All the methods require the scale developers to select a large number of appropriate items that span the entire attitudinal continuum they wish to measure. Respondents then evaluate the candidate items to indicate the item position on this continuum. The procedures continue to the calibration phase, which involves the choice of an appropriate subset of items as verbal scale anchors, according to their psycho-numerical characteristics. Figure 1 depicts calibrated quality scales for the USA (Myers and Warner, 1968) and France (Angelmar and Pras, 1978). Our study extends this literature with a Japanese verbal rating scale for quality measurement.Likert (1932) critiqued Thurstonian scale construction methods as laborious and promised simpler development processes. On a Likert scale, respondents indicate their degree of agreement with positively or negatively worded items (e.g. Kwok et al., 2006). Therefore, the term \"item\" refers to the statements or the survey questions ( Figure 2). Researchers can use the data from sets of highly correlated items to build summative indicators that estimate a respondent's position on the underlying construct or to form latent variables that indicate the construct relations (Chung et al., 2008; Nguyen and Barrett, 2006). Likert never provided theoretical justifications for this multi-item construct indicator approach, but rationales emerged under the label of test theory. 
Classical test theory involves a diverse body of psychometric methods that can identify the most discriminating, converging, and reliable items according to discrimination indexes, item-to-total correlations/-deletion coefficients, or loadings from factor analysis (e.g. Churchill, 1979; Witkowski et al., 2003). The final multi-item Likert scale consists of a reasonably small set of items that appear optimal with regard to some criteria, such as a Cronbach's α greater than 0.7, a widely accepted condition for measurement reliability.\n2.1 Advantages of the Likert approach\n3. Verbal rating scale development: In this section, we outline the three steps in a Thurstone scale development procedure: identification of appropriate items, scale calibration, and measurement evaluation (reliability and validity testing). We use data collected from student samples, which are appropriate for this illustrative purpose (e.g. Wong et al., 2008).\n3.1 Item identification\n4. Conclusions: Perhaps the most important conclusion from this article is that both theoretical and empirical evidence exists to consider Thurstone or verbal rating scales as at least viable alternatives to the more frequently employed Likert approach to attitude scaling. To shed more light on the use of verbal rating scales, we illustrate an actual Thurstone scale development procedure, including the item generation, scale calibration, and measurement evaluation steps, and test for its reliability and validity. Attitude measurement remains an important concern for all management fields, yet verbal rating scales are rarely employed, nor do standardized approaches to their development and evaluation exist. To this end, we offer theoretical and practical guidance, as well as sources of inspiration, for researchers who might consider employing or developing verbal rating scales. Our new verbal rating scale also provides several practical advantages for measuring perceived product quality in Japan. 
First, it offers a ready-to-apply measurement instrument for many types of data collection, including paper-and-pencil interviews, online questionnaires, and telephone surveys. The scale calibration process ensures that researchers may gather interval-level data with this scale, which offers greater flexibility for empirical analyses. Second, using a single question for stimuli evaluation, rather than multiple items, reduces the length of questionnaires. Therefore, the new scale should be more cost effective than multi-item quality measurements and reduce respondents' fatigue during the evaluation task. Third, respondents may find it easier to perform their quality evaluations on the new scale, because it uses common language expressions as scale anchors. Survey participants also may perceive a single, verbal rating scale as less cognitively demanding than multi-item Likert scales. Short questionnaires and comprehensible measurement help increase respondents' willingness to share information and may reduce the likelihood of misunderstandings. We provide the final version of our proposed verbal rating scale in Figure 6. Beyond these practical advantages, verbal rating scales offer appropriate operationalizations of the empirical response process, with its implications for measurement validity. Previous research reveals that misspecifications in the response process can lead to biased results due to the ranking of individual respondents (Roberts et al., 1999; Stark et al., 2006). According to Stark et al. (2006, p. 37), "the inclusion of just a few items that do not meet the assumptions of a dominance model can markedly change the rank order of high-scoring individuals," which then leads to invalid behavioral predictions and reduced measurement utility. 
However, examples from recent literature suggest that studies frequently employ Likert scaling procedures even when the assumptions of the dominance response process cannot be met because the item wording includes moderate rather than extreme terminology (Appendix). Such a misspecification of the response process may have little influence on the accuracy of the predictions of construct relations though. The strength of the causal relations, such that higher quality leads to higher price acceptance, may be predicted sufficiently well by a poorly specified dominance approach with moderate Likert items, because correlation coefficients are relatively insensitive to rank-order changes. More research is needed to explore in greater detail why and in which conditions the dominance or ideal-point approach to item responses is more appropriate for improving measurement validity. We also recognize some limitations of our study that could benefit from additional research. First, the evidence that the new scale delivers interval-level data should be qualified. Interval data can be assumed only to the extent that the respondents to our scale calibration study perceive the intervals between numerical anchors as truly equidistant (Figure 5). We do not test this assumption. Furthermore, response bias may exist in the calibration study, because the meaning of each candidate item may not be reflected accurately by the numerical categories. For example, respondents may associate good or bad luck with some numbers, such that their numerical preferences influence their response behavior. Studies of response bias in verbal rating scale calibration procedures would provide meaningful insights into an underinvestigated field. Second, we claim construct validity for our new scale, but we also note some constraints on that claim. Our study of construct validity is limited to how perceived product quality may be influenced by CE and the competitive situation of domestic and foreign brands. 
Further research should broaden this demonstration of construct validity, perhaps by building different nomological networks and relating the new measurement to other constructs, such as customer satisfaction, trust, company reputation, or employee satisfaction, as well as considering the potential moderators or mediators of such relations. A vast body of literature pertaining to such relations exists, but empirical investigations mostly refer to Likert-type measurements. Are these results replicable when we use Thurstone scales, and why might differences or commonalities emerge? We also recognize that the reasoning we adopted to demonstrate the nomological validity of the new scale conflicts slightly with the reasoning we adopted to choose Thurstone scales for our attitude measurement. That is, we used a Likert-scaling procedure to operationalize the CE construct, but we had previously argued that Likert-scaling procedures may lead to validity problems. Because the CETSCALE items in our study express extreme attitude positions (rather than moderate positions), we believe the dominance response process should work well for this scale and do not believe that the misspecification of the response process presents a significant threat to our assessment of scale validity. Third, we note the limited generalizability of the new scale; it relies on data from samples of respondents who are obviously not representative of the wider Japanese consumer population. Our consumer samples consist of management students from a major Japanese university, who in 2007 lived in the Osaka region, which means they may use and perceive verbal expression differently than would consumers from other lifestyle segments or geographical regions. Further research should investigate if and to what extent the new scale generalizes across consumer segments. 
Scale developers should recognize that language evolution is a dynamic process, so frequently employed expressions at one point in time may disappear, even as new language standards evolve. Kawashima (2001) highlights the importance and effects of changes in the Japanese language in particular. Later researchers inevitably must address the appropriateness of earlier measures; some of the expressions Myers and Warner (1968) and Angelmar and Pras (1978) used in their time likely are no longer appropriate in verbal rating scales for quality measurement today. Another interesting area for future research may be to develop verbal rating scales for cross-cultural comparisons. For example, Wongtada and Rice (2008) use item response theory to reveal that interpretations of measurement scores by Thai and Egyptian employees require adjustments to be meaningful. Instead of conducting a posteriori adjustments of cross-cultural survey data or measurement instruments, researchers might try to calibrate a priori culturally or language-invariant verbal rating scales, especially in regions that contain significant language diversity. For example, Chinese is a macro-language that comprises 13 sub-languages (ISO 639-3). Do verbal rating scales written in Chinese script require different expressions to improve the validity of comparisons across Chinese consumers who speak different languages or dialects? Finally, though the development of a verbal rating scale was the primary objective of this study, we also note its contributions to CE literature. Specifically, findings by Nijssen and Douglas (2004), which were limited to a single, foreign brand-dominated product category (cars) in the Netherlands, receive confirmation from our study, which features a culturally different context and five additional product categories (in addition to cars, laptop computers, soft drinks, sport shoes, theme parks, and internet search engines).\nNote: 1. 
The D-T scale consists of seven items: delighted, pleased, mostly satisfied, mixed (about equally satisfied and dissatisfied), mostly dissatisfied, unhappy, and terrible.\nFigure 1\n
"role": "user"
},
{
"content": "Purpose: - This paper aims to investigate whether verbal rating scales are viable formats for attitude measurement through an application to Japanese consumers' product quality perceptions.\nDesign/methodology/approach: - The article notes theoretical differences between Likert-based and Thurstone approaches to attitude measurement. The paper illustrates a Thurstone scale development process.\nFindings: - The new scale possesses nomological validity; it correctly predicts how consumer ethnocentrism relates to product quality evaluations for brands in different competitive situations.\nPractical implications: - The convenient, ready-to-apply verbal rating scale can measure Japanese consumers' perceptions of product quality. The article also offers survey researchers some practical guidance for developing their own verbal rating scales.\nOriginality/value: - Verbal rating scales are rarely found in existing literature. This study sheds light on a frequently overlooked measurement scale format for measuring attitudes.\n",
"role": "assistant"
}
] |
Defining sustainable universities following public opinion formation process
|
[
"Higher education",
"Sustainability",
"Universities",
"Social responsibility",
"Holistic view for sustainability",
"Public opinion formation process"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\nIntroduction: In the early 70s, the scholars witnessed the rise of sustainability. Growing number of organizations realized the ruining of the environment would seriously weaken the ability to expand prosperity and economic justice. Although the approaches to sustainability differ according to the area of interests of the definer, each one highlights actions that are ecologically sound, socially just, economically feasible and caring (Clugston and Calder, 1999).\nTheoretical background: Corporate sustainability\nMethodology of the study: Research design\nFindings and results: The focus group discussions were recorded digitally, transcribed and anonymized. Afterwards, the transcripts were analyzed by scholars to see the repeating outcomes of the results. The results of the research conducted are summarized in the following sections based on the groups that we have focused on in the public opinion formation process.\nDiscussion and conclusions: The universities have a fundamental role for achieving sustainability in the society, as discussed above. They are the major actors who should pursue sustainability themselves so that it trickles across all layers of the society. However, when \"sustainable universities\" in Turkey are mentioned, there is no proper and accurate definition for the concept. Thus, this study aimed to form one by following the major steps in the public opinion formation process.\n",
"role": "user"
},
{
"content": "Purpose: The purpose of this paper is to define the sustainable university in Turkey, by considering perspectives of various stakeholders such as experts, intellectual, public, political parties and media using public opinion formation analysis. The paper aims to re-define the \"sustainable university\" with all dimensions including environmental, economic and social factors in Turkey.\nDesign/methodology/approach: In this study, the model of shaping the policy agenda and public opinion formation by certain groups, presented by Papadakis (1996), was used to determine the main characteristics of a sustainable university. Based on this model, the researchers collected data from intellectuals, experts, political parties, media and public simultaneously. Focus groups and archival search were used.\nFindings: The results of the public opinion formation process presented that the definition of sustainable universities in Turkey includes the economic, ecological and social aspects and a holistic view of different groups. The findings of the study presented that the definition of sustainable universities in Turkey has many facets. Different groups in the public opinion formation process share almost similar views, though these points are usually mentioned under different headings. Thus, nearly each respondent in the public opinion formation process discusses the concept of being a \"trade mark\" as a university.\nResearch limitations/implications: The research may lack generalizability, as it takes place in Turkey, which is a non-Western country.\nPractical implications: The research sheds a light for universities, which are the major cornerstones of higher education, especially in the area of sustainability and sustainable development. Also, universities have a great impact in regional development, which stresses once again the importance of sustainability in higher education. 
They should modify their education programs and curricula in accordance with sustainability. University-industry cooperation should be provided. They should manage to become a trademark.\nSocial implications: Universities being the major cornerstones of higher education play a vital role in regional development of countries; therefore, their sustainable development should be well handled to enable regional development.\nOriginality/value: The universities are the major actors which should pursue sustainability, as they affect society. The studies on sustainability and universities have been generally grouped under two main subjects in the literature. First group of studies highlights the support of universities for sustainability of corporations, whereas other studies stress the importance of becoming a sustainable university. There are not many studies on this subject which considers the public opinion formation process throughout the literature; therefore, this study contributes to the literature with this aspect. The study takes place in Izmir, Turkey, which is a non-Western country. Most of the studies on this subject take place in Western countries, so as the study is deployed in a non-Western country, it reflects a different point of view.\n",
"role": "assistant"
}
] |
Combining actor-network theory with interventionist research: present state and future potential
|
[
"Methodology",
"Interventionist research",
"Philosophy",
"Actor-network theory",
"Research practice"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\n1. Introduction: The following discussion took place after Author No. 2 had given a seminar presentation on the basic ideas of ANT and how they had been applied in management accounting research:Author No. 1: Listen, I was intrigued by one of the points you raised back there, when you wondered why the authors of that interventionist ANT paper did not include themselves in the network of actors.\n2. The potential for combining IVR and ANT: 2.1 Key features of IVR\n3. Present state: realized combination of IVR and ANT in accounting research: 3.1 Data and method of analysis\n4. Discussion: Based on the interviews, we identified three reasons for the scant combined use of IVR and ANT. The first reason seems to be a certain epistemological dilemma, as perceived by some ANT scholars, according to which an ANT researcher should not have an agenda since anything can take place at a given phase of, for instance, implementing an innovation. This view resonates with the ANT credo that the acceptance of a fact or the success of an innovation should not be explained ex post facto by appealing to some essential qualities since these are only determined during the translation - a process which may also fail. Partially in relation to this principle Latour (2005) posits that researchers should not employ fixed theoretical concepts or frameworks when analysing their empirical material as these could pre-shadow the results. It would seem that some of our interviewees have interpreted this limitation in a wider sense, to imply that a researcher should not become personally involved in the reality that she is studying. 
However, as discussed in Section 2.2, in his recent works Latour (2004a, b, 2005) explicates that conducting research is always a political act and professes that currently ANT researchers wish to be \"much more engaged than what was possible with the scientist dream of a disinterested gaze\" (Latour, 2005, p. 256). It has been argued that this political project, which is also shared by the broader \"ANT and After\" community (see e.g. Dugdale, 1999), makes it possible for ANT researchers to help \"to enact some realities rather than others in specific empirical settings\" (Alcadipani and Hassard, 2010, p. 428).\n5. Conclusions: This paper was inspired by our ad hoc observations that although both ANT and IVR are employed in management accounting studies, for instance in those on innovation, they rarely appear in combination. The purpose of the paper was, therefore, to examine the philosophical and practical compatibility of ANT and IVR and search for explanations for their scant combined use. Towards this end, we conducted an analysis of accounting and MOS literature as well as interviewed eight accounting scholars who had applied IVR and/or ANT in their research. To reiterate our position, we have not tried to claim that interventionist ANT research would be somehow \"better\" than other approaches but to conduct a state-of-the-art analysis and reflect on its implications for research practice.\n",
"role": "user"
},
{
"content": "Purpose: The purpose of this paper is to examine the philosophical and practical compatibility of actor-network theory (ANT) and interventionist research (IVR) and search for explanations for their scant combined use. The scope of investigation is limited to accounting, management and organization studies (MOS), but the findings are believed to be applicable in other social sciences as well.\nDesign/methodology/approach: An analysis is conducted of accounting and MOS research in addition to interviewing eight accounting scholars who have applied IVR and/or ANT in their research.\nFindings: A comparison of the philosophical and other features of ANT and IVR suggests that they should be relatively easily combinable in studies. Based on interview material, three types of barriers to combining ANT and IVR are identified: perceived epistemological incompatibility, fear of going native or losing neutrality and academic tribes. However, subsequent analysis indicates that none of these forms an insurmountable obstacle to the combination.\nResearch limitations/implications: The combined application of ANT and IVR could benefit both IVR and ANT researchers in management accounting as it would enable them to conduct theoretically grounded studies on dynamic processes, such as the emergence and implementation of accounting innovations, to pose original research questions and to find new perspectives to accounting phenomena.\nPractical implications: Employing ANT and IVR in combination could increase organizational interest in management accounting research.\nOriginality/value: The paper contributes to the discussion on the compatibility of different research approaches and highlights ways in which researchers could benefit from combining ANT and IVR.\n",
"role": "assistant"
}
] |
The factors influencing CoP activities and their impact on relationship commitment and individual performance
|
[
"Organizational support",
"Offline CoP activities",
"Online CoP activities",
"Perceived risk of losing uniqueness of knowledge"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\n1. Introduction: Nonaka and Takeuchi (1995), while emphasizing that the most consistent competitive advantage of a company is knowledge, asserted that a company generates competitive products and technologies via the creation of new knowledge and the expansion and application of this knowledge to the efforts of the entire company. As the importance of knowledge resources became more evident, the field of knowledge management emerged in order to support and facilitate the creation and sharing of new knowledge on the basis of knowledge resources, with or without form. Companies have hurried to adopt knowledge management, and developed knowledge management systems, because knowledge management emerged as a future strategy for corporate survival. However, in actuality, such knowledge management strategies adopted to create and share \"organizational knowledge\" resulted in ineffective \"data piles\" that contain only past information (Jeon, 2006; McDermott, 1999; Wenger, 2004). In order to solve this problem, and to accomplish knowledge sharing and creation, what has been suggested on the basis of a \"human network\" is a CoP (Community of Practice, hereafter CoP, is also called as practice community, learning organization, learning club, study group, and so on) (Leonard-Barton, 1995).\n2. Theoretical background: 2.1 Online and offline CoP activities\n3. Research variables and hypotheses: 3.1 Affirmative affect\n4. Research methodology and data analysis: 4.1 Data collection\n5. Conclusions and implications: According to the results listed above, our findings suggest several implications. The perceived risk assessed in this study was determined to be an inhibitor of the online CoP activities. We surmise that this is attributable to the risk of losing the uniqueness of one's own knowledge as the result of online CoP activities. Online CoP activities can be facilitated by eliminating such an inhibitor. 
In this study we suggested organizational support, and used it as a variable to measure this factor. Herein, organizational support was measured in terms of providing incentives for CoP activities, promoting CoP activities, and overall level of support. As can be observed in regard to the measurement items, organizational support reflects the organizational environment: constant CoP promotion activities reflect constant organization-level interest in a CoP, and the provision of incentives by adequate rewards can facilitate CoP activities. Such factors include the ones influencing the organizational climate that have a positive effect on knowledge sharing intentions (Bock et al., 2005). Based on the results of this study, organizational support is expected to neutralize the influence of perceived risk, and hindrances to online CoP activities can thereby be eliminated.\n6. Limitations and future research: The limitations and future research direction of this study are as follows. First, the factors influencing participation in the CoP activities are limited. This study was conducted on the basis of Triandis' model in relation to factors that influence the CoP activities. However, in cases in which CoPs within a company exist in an organization, the CoPs are influenced by the organizational climate, organizational culture, and so on, as was illustrated in the study conducted by Bock et al. (2005). Although this study attempted to assess organizational culture in accordance with the social norm variable, it measured the social norm only with regard to the level of respondents' perceptions of others' recognition. In future studies, variables including fairness, organizational innovativeness, and affiliation of an organization could be added in order to assess the characteristics of CoP activities that differ with the various social and cultural environments.\n",
"role": "user"
},
{
"content": "Purpose: The objective of this study is to identify the factors that affect CoP activation and performance variables obtainable through CoP activities, and to gain greater insight into their relationships and the mechanisms. In particular, this paper intends to illustrate the role of perceived risk factor for the loss of uniqueness of one's own knowledge in terms of their influence on CoP activities.\nDesign/methodology/approach: In this study, the human behaviours were divided into online and offline CoP activities and adopted affirmative affect and social norm from the Triandis model. In addition, the paper considered perceived expectation, perceived risk, and organization support as independent variables. These would accelerate online and offline activities in the community of practice. The paper considered relationship commitment and individual performance in the context of performance evaluations via CoP activities. A structural equation model was developed with research variables and hypotheses.\nFindings: As the consequence of the empirical assessment of the variables influencing the on/offline activities of a CoP, social norm, perceived expectation, perceived risk, and organizational support showed significantly influential relationships with online activities, and affirmative affect, perceived expectation, and organizational support evidenced significantly influential relationships with offline activities. However, with regard to online CoP activities, affirmative affect was not shown to be significant. As to offline activities, perceived risk was not shown to be significantly influential, while it was determined to significantly influence online activities in a negative direction.\nOriginality/value: The results of this study demonstrated that on/offline CoP activities were significantly influential in terms both of relationship commitment and individual performance.\n",
"role": "assistant"
}
] |
The entrepreneurial profile after failure
|
[
"Exit",
"Decision trees",
"Entrepreneur profile",
"Institutional dimensions",
"L26"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\nIntroduction: The process of the entry and exit of companies from a market is the key to the productivity of national economies. As such, this process has been extensively studied in the literature, and it is well known that it can be determined by specific factors of the company, industry and country (Carree and Thurik, 1996; Fok et al., 2009). In contrast, the process by which individuals decide to enter and exit an entrepreneurial venture is less well understood. In this sense, one open question relates to the potential effects that the exit or failure of a business can generate on an entrepreneur. Following Hessels et al. (2011), based on the Becker's theory of human behaviour, this work considers entrepreneurial exit as an indicator of accumulated entrepreneurial human capital, which is reflected in the acquisition of knowledge, skills and experience.\nTheoretical framework: Exit, business failure and individual consequences\nData and methodology: The purpose of this paper is to examine the profile of entrepreneurs that failed in previous businesses, but despite their failure, decide to start or maintain another business. To do so, data from the GEM's Adult Population Survey 2016 for Chile were used. The original full data set consists of 9,196 records.\nResults and discussion: Table III shows the results of the ANOVA procedure. Cells contain the mean, standard deviation, standard error of the mean, number of observations and the F-test metric. The last column provides information about the hypothesis related to each variable and the result of this.\nConclusions: The main conclusion of the study is that there is a different entrepreneurial profile between entrepreneurs who fail (re-entrepreneurs) and those who have not failed recently (first-time entrepreneurs) in Chile. 
This profile is mainly defined by assuming the role of angel investors, sharing ownership of their business, having business skills and levels of education. These four variables have been validated by the three methods that were applied in our study. This finding contributes to strengthening the importance of failure in the process of entrepreneurship, regarding both the positive and negative consequences for individuals. This is also in line with several previous studies on the subject, mainly in countries that are not a part of Latin America (Hessels et al., 2011; Ucbasaran et al., 2010).\n",
"role": "user"
},
{
"content": "Purpose: The purpose of this paper is to examine the profile of entrepreneurs who failed in previous businesses, but despite their initial failure, decide to start or maintain another business. Based on the literature, re-entrepreneurs present some unique characteristics that allow them to overcome and learn from their previous ventures. Moreover, following the institutional dimensions literature, the paper shows that these dimensions should also affect the prevalence and type of re-entrepreneurial profiles.\nDesign/methodology/approach: Traditional ANOVA and logistic regression analysis were used as a way to test several hypotheses. Moreover, decision trees were implemented to provide further insights with respect to the characteristics of the relationships under study. Data from the Global Entrepreneurship Monitor Survey (Adult Population Survey 2016 for Chile) is the main source of information.\nFindings: The results of this paper show that for Chilean entrepreneurs, there is a different profile between those who fail and abandon and those who decide to start a new business after failing. These differences show that post-failure entrepreneurs behave like angel investors, have better business skills and are more experienced. Additionally, they are more likely to share ownership of their new business.\nOriginality/value: The paper contributes to the literature with empirical results that validate that failure determines special characteristics in entrepreneurs through a methodology that is relatively new in the community. Also, the paper discusses how institutions (institutional theory) could play a role in this phenomenon. This can help decision makers and the academic community better understand the nature of this type of entrepreneur and thus provide insights for more focussed public policies and better-informed business decisions.\n",
"role": "assistant"
}
] |
Setting our bibliographic references free: towards open citation data
|
[
"Semantic publishing",
"Open access",
"Citations",
"Open citation corpus",
"References",
"SPAR ontologies"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\n1. Introduction: We are living in the early part of the decade of open information. Following a spate of recent reports and government policy statements (Boulton, 2012; Finch, 2012; American Meteorological Society, 2013; Burwell et al., 2013; New South Wales Government, 2013; Research Councils UK, 2013; Wellcome Trust, 2013), it can be fairly stated that the policy debate on open access (OA) has been won. Interest is now focused on implementation of the open agenda.\n2. What exactly is a reference within a scholarly document?: The act of bibliographic citation - a scholar referencing the published work of others, that usually originates \"from the interestingness of the phenomenon that has been addressed in the old article and considered again in the new one\" (Liu and Rousseau, 2013) - is central to scholarly communication, permitting the attribution of credit and integrating our independent research endeavours - as envisioned by Cameron's work on promoting the creation of an universal citation database (Cameron, 1997). Citations knit together the whole world of scholarship into a gigantic citation network, a directed graph with publications as the nodes and citations as the links between them. Scholarly communication involves the flow of information and ideas through this citation network, and analysis of changes in the network over time can reveal patterns of communication between scholars and the development and demise of academic disciplines - for instance as suggested in de Solla Price (1965), Small (1973), Small and Griffith (1974).\n3. Drawbacks of currently available sources of citation data: 3.1. Cost\n4. The OCC: The OCC, a new open repository of scholarly citation data, is attempting to improve the current situation. Its objective is to provide accurate bibliographic citation data that others may freely build upon, enhance and reuse for any purpose. 
It does this by making the citation data available under a Creative Commons CC0 1.0 public domain dedication[19], without restriction under copyright or database law.\n5. Towards semantic publishing: 5.1. Machine-readable metadata\n6. New tools and interfaces to handle citations better: Scholarly communication, at this mid-point in the digital revolution, is in an ill-defined transitional state that lies somewhere between the world of print and paper and the world of the Web and computers, with the former still exercising significantly more influence than the latter. We now publish online, but five years after our own semantic publishing exemplar was put online[47] (Shotton et al., 2009), the majority of papers are still published in the form of static PDF documents. We need tools to facilitate the transition of the online paper from its present \"horseless carriage\" state into the scholarly communication equivalent of a Ferrari.\n7. Building venice: a vision: Imagine that we are trying to create the bustling city of Venice - the city of scholarly communication - from a collection of islands that represent individual scholarly publications. Citations are the bridges that enable people to pass from one island (e.g. a conference paper) to others (e.g. journal articles and book chapters). At present, while local travel to the next island is permitted, unrestricted travel over the entire network of bridges requires an expensive season ticket, affordable only by rich professionals. The general populace is excluded, and as a consequence the social and commercial growth of the city is stunted. In contrast, were the bridges to be opened without fee to the general populace, people would be able to travel freely through the whole city of scholarly knowledge, and the community would thrive.\n8. 
Conclusions: In this paper, we have presented the OCC, born from the need to make scholarly citation data OA, the SPAR Ontologies (including CiTO, the Citation Typing Ontology) that support that work, and some prototype tools that facilitate the annotation of references with CiTO properties, thus enabling the description of the purpose of a citation. Our hope is that scholars, publishers and institutions will build upon, enhance and reuse the citation data contained within the OCC. We also seek their assistance to increase the content of the OCC, for instance by also adding acknowledgment references to the dataset, since they can actually be used to \"indicate influential contributions to scientific work\" (Giles and Councill, 2004); to further develop user interfaces for searching, browsing and visualising the network of scholarly citations; and to enhance tools such as the CiTO Reference Annotation tools that facilitate the annotation of citations by humans, aided by an automated recommendation system, CiTalO, that assists that process by proposing the most appropriate CiTO properties to describe the nature and purpose of citations from analysis of their textual contexts.\n",
"role": "user"
},
{
"content": "Purpose: Citation data needs to be recognised as a part of the Commons - those works that are freely and legally available for sharing - and placed in an open repository. The paper aims to discuss this issue.\nDesign/methodology/approach: The Open Citation Corpus is a new open repository of scholarly citation data, made available under a Creative Commons CC0 1.0 public domain dedication and encoded as Open Linked Data using the SPAR Ontologies.\nFindings: The Open Citation Corpus presently provides open access (OA) to reference lists from 204,637 articles from the OA Subset of PubMed Central, containing 6,325,178 individual references to 3,373,961 unique papers.\nOriginality/value: Scholars, publishers and institutions may freely build upon, enhance and reuse the open citation data for any purpose, without restriction under copyright or database law.\n",
"role": "assistant"
}
] |
Walking the tightrope: how rebels "do" quality of care in healthcare organizations
|
[
"Organizational performance",
"Accounting",
"Quality healthcare",
"Rebels"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\nIntroduction: Quality improvement in healthcare is increasingly concerned with the standardization of professional behaviour and organizational routines. Starting with the \"discovery\" of practice variation in the 1970s, methods such as clinical guidelines, accreditation, quality registries and performance indicators have been developed to make healthcare practices increasingly measurable, comparable and governable (Bonde et al., 2018; Wallenburg, Quartz and Bal, 2019; Weggelaar-Jansen et al., 2018). This \"system\" approach to healthcare has been fostered by the release of the \"Institute of Medicine (2000)\" report in and its numerous follow-up accounts about the importance of improving quality of care through building organization-based systems of quality control and improvement. A striking example is the method of \"root cause analysis\", a systemized procedure for investigating clinical errors that have quickly spread as an optimal and universal practice of preventing (future) medical errors as it detects organizational risks and encourages mutual learning (Nicolini et al., 2016). It is generally believed that \"best\" practices, such as root cause analysis, and subsequent implementation of those best practices in other organizations should guide the way to better and safer care (Zuiderent-Jerak and Berg, 2010). This system approach (also termed \"Safety I\") is, however, criticized for its strong focus on organizational solutions to clinical risks, overlooking the dynamics and uncertainties that characterize clinical work (e.g. Liberati et al., 2019; Dixon-Woods et al., 2009; Wallenburg et al., 2013). Furthermore, it is argued that focusing on risks and errors neglects (and fails to appreciate) the creative and tacit work professional carry out in daily clinical practice that prevents instead of cause clinical errors (Mesman, 2012; Waring and Bishop, 2010). 
These scholars argue for a Safety II approach focusing on \"what goes right\" and further improving quality of care by learning from those valuable examples (Hollnagel et al., 2013).\nSocial deviance in healthcare: In the literature on quality of care, \"positive deviance\" has been portrayed as a possible way out of the frequent remedy of further regulation and standardization. Opposed to a problem-focussed approach - meaning identifying and eliminating threats to a high level of performance - positive deviance presumes that knowledge about \"what works\" is available in existing organizations that demonstrate consistently exceptional performance (Bradley et al., 2009; Morrison, 2006). The positive deviance approach is based on the premise that every community has individuals or groups whose uncommon behaviours and strategies enable them to find better solutions to problems than their peers or external experts (Singhal and Bjurstrom, 2015). Positive deviants are pointed out as those social actors possessing the reflexivity, innovativeness, agency and spaces to act otherwise and to find different institutional solutions to do better (Singhal and Bjurstrom, 2015). They dare to \"rock the boat and stay in it\" (Bevan, 2013). Positive deviants (or \"quality radicals\" as Bevan calls them[1]) tend to shake up the organization, but do not intend to harm or leave the organization, rather seeking to care for the organization and its purposes (i.e. caring for patients). Such \"betterness\", it is contended, is often situated in the mundane and simple micro-actions carried out in everyday practices. An example is a nurse who uses her knuckle (not her fingertips) to press the elevator button in order to avoid contamination, seeking to reduce the spread of (possibly) dangerous infection (Singhal and Bjurstrom, 2015). 
The positive deviance literature demonstrates the intelligibility of small and meaningful practice variation that engenders larger effects.\nMethods: Healthcare quality management in Dutch hospitals\nQuality rebels at work: Clinical microsystems of doing quality work\nDiscussion: The rebels portrayed in this paper like to do things somewhat differently. Steered by their intimate ways of clinical knowing and patient orientation, rebels put considerable effort into building and caring for their clinical microsystems. They define their own work area - materially, clinically and socially. In this research, we introduced the concept of \"context work\" and \"contexting\" to point out the professional and managerial practices of actively and continuously creating socio-material environments in which rebels could actively engage with actors within and outside their organizations to achieve aimed-for quality practices, such as safeguarding quality standards in case a manufacturer changes names for drugs, increasing the risk of errors in drug administration. Contexting thus takes shape both at the individual level of patients and at the level of organizing care. While in the literature, context is often depicted as something outside the control of actors, impinging on their practices, we have illustrated how rebel groups actively create contexts to do quality care and account for it. Furthermore, we have shown that contexting involves compelling normative work: rebels establish their own standards of optimal care and put a lot of effort in persuading others of those \"goods\". The active work is both goal oriented and strategic. Rebels \"walk the tightrope\" to safeguard legitimacy and to avoid harming the organization they are ultimately a part of. Hence, rebels conduct boundary work (Gieryn, 1983) in the sense that they define boundaries and care for their own microsystem; defining, facilitating and protecting the work being done. 
Yet they also intentionally cross boundaries to create outside contexts that enable and facilitate quality work, allowing them to legitimately carry out their activities in the hospital environment.\n",
"role": "user"
},
{
"content": "Purpose: The purpose of this paper is to empirically explore and conceptualize how healthcare professionals and managers give shape to the increasing call for compassionate care as an alternative for system-based quality management systems. The research demonstrates how quality rebels craft deviant practices of good care and how they account for them.\nDesign/methodology/approach: Ethnographic research was conducted in three Dutch hospitals, studying clinical groups that were identified as deviant: a nursing ward for infectious diseases, a mother-child department and a dialysis department. The research includes over 120 h of observation, 41 semi-structured interviews and 2 focus groups.\nFindings: The research shows that rebels' quality practices are an emerging set of collaborative activities to improving healthcare and meeting (individual) patient needs. They conduct \"contexting work\" to achieve their quality aims by expanding their normative work to outside domains. As rebels deviate from hospital policies, they are sometimes forced to act \"under the radar\" causing the risk of groupthink and may undermine the aim of public accounting.\nPractical implications: The research shows that in order to come to more compassionate forms of care, organizations should allow for more heterogeneity accompanied with ongoing dialogue(s) on what good care yields as this may differ between specific fields or locations.\nOriginality/value: This is the first study introducing quality rebels as a concept to understanding social deviance in the everyday practices of doing compassionate and good care.\n",
"role": "assistant"
}
] |
Competitive priorities and knowledge management: An empirical investigation of manufacturing companies in UAE
|
[
"Knowledge management",
"Customer focus",
"Knowledge-intensive environments",
"Manufacturing capabilities"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\n1. Introduction: Knowledge management (KM) is recognized as an important tool for sustaining competitive advantage and improving performance. The evaluation of KM performance has become increasingly important since it provides directions for organizations to enhance their competitiveness. The twenty-first century is the era of knowledge economy, in which most organizations possess knowledge that enables them to improve their performance (Zaied et al., 2012). Organizations that build up and leverage knowledge resources attain greater success than firms that are more reliant on tangible resources (Autio et al., 2000). KM is ever becoming a primary business function for many firms as well as for individual projects as they realize that organizational competitiveness pivots on the effective management and creation of knowledge (Ajmal et al., 2010).\n2. Literature review: 2.1. Manufacturing capabilities\n3. Research setting and analysis: 3.1. Overview of AHP\n4. Discussion and conclusion: The contemporary aggressive manager has acknowledged for long, the role of KM in improving customer satisfaction and business growth (Anantatmula, 2007). This vital recognition has helped him identify the key attributes of KM. But, at the same time, there is a lack of research that highlights the role of KM attributes in fostering the competitive priorities of manufacturing firms, especially in growing economies such as UAE.\n",
"role": "user"
},
{
"content": "Purpose: The purpose of this paper is to identify attributes of knowledge management (KM) for large and small manufacturing companies in United Arab Emirates (UAE) and to propose a process to evaluate the competitive priorities of these companies.\nDesign/methodology/approach: This research is designed by composing multi criteria decision making of KM attributes in order to identify competitive priorities of large and small manufacturing companies to achieve better customer focus. To deal with this complexity of multi criteria decision-making process, Analytical Hierarchical Process (AHP) method has been used in this research.\nFindings: Results show that large manufacturing companies are putting more emphasis on \"Know-Why\" knowledge attribute in all four competitive priorities for achieving customer focus. While, small size manufacturing companies put higher weight on \"Know-What\" knowledge attribute by focussing on two competitive priorities (flexibility and quality).\nResearch limitations/implications: Results are derived from a limited number of empirical data only in one country, therefore these cannot be generalized. Future research with larger samples of small and large manufacturing firms from other countries is needed.\nPractical implications: To be competitive, manufacturing companies must reshuffle their production strategies to allow them to play a role in global knowledge-intensive market. Therefore, they must incorporate knowledge attributes as a way to achieve higher levels of performance.\nOriginality/value: This study is among the first and most exhaustive ones carried out in the small and large size firms operating in the manufacturing sector of UAE. It provides a systematic approach to identify the operational competencies and knowledge attributes being followed by large and small manufacturing companies.\n",
"role": "assistant"
}
] |
Identification, intentions and entrepreneurial opportunities: an integrative process model
|
[
"Opportunity recognition",
"Entrepreneurial intentions",
"Entrepreneurial identification",
"Opportunity exploitation"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\n__NO_TITLE__: The recognition and exploitation of opportunities, defined as \"those situations in which new goods, services, raw materials, and organizing methods can be introduced and sold at greater than their cost of production\" (Shane and Venkataraman, 2000, p. 220), have been described as the defining features of entrepreneurship (Shane, 2003; Shane and Venkataraman, 2000). In providing explanations of which actors are likely to recognise and exploit opportunities, several authors drawing upon Ajzen's (1987, 1988, 1991) theory of planned behaviour have pointed to the (nascent) entrepreneur's intentions, in this context meaning her degree of commitment toward the target behaviours of opportunity recognition and exploitation (e.g. Bird, 1988; Katz and Gartner, 1988; Krueger, 1993; Krueger and Carsrud, 1993; Krueger et al., 2000; Lortie and Castrogiovanni, 2015). While this research has provided valuable insight into who is likely to recognise and exploit an opportunity, it also raises an important question: if entrepreneurial intentions predict entrepreneurial outcomes, what factors influence an actor to have those intentions?\nEntrepreneurial identification and intentions: As illustrated in Figure 1, this study suggests that entrepreneurial intentions mediate the relationship between an individual's identification as an entrepreneur and entrepreneurial behaviours. Here this manuscript begins to outline the relationship between identification and intentions.\nDiscussion: Drawing on insights from identity theory and Ajzen's theory of planned behaviour, this study provides a significant contribution to research exploring the antecedents to opportunity recognition and exploitation. The model depicted in Figure 1 represents an aggregated view emergent in the literature on identification, intentions and opportunities. 
Given the influence of Ajzen's seminal work on entrepreneurship research, his framework was used as a starting point. From that starting point, this study slots in entrepreneurial identification in the place of attitudes as a more accurate psychological predictor of intention formation (P1 and P2). This study then posits the variables derived from the entrepreneurship literature most logically likely to influence the relationship between identification, intentions and opportunity recognition and exploitation - entrepreneurial munificence and domain relevant knowledge - as moderators of the relationship between intentions and entrepreneurial behaviour (P3a and P3b) and as direct effects on identification (P4). Adapting straightforward and well-established cues from the identification literature to a new context, this study finally places social norms as a direct influence on identification (P5) and suggests entrepreneurial behaviours will influence affective commitment to and the situational relevance of a person's entrepreneurial identity (P6). This paper concludes with a brief discussion of limitations, future research opportunities and contributions to theory and practice.\n",
"role": "user"
},
{
"content": "Purpose: The purpose of this paper is to explore one of the cognitive processes whereby persons develop intentions to enact entrepreneurial behaviours via exploring entrepreneurial identification as a significant influence on an individual's intentions to recognise and exploit opportunities.\nDesign/methodology/approach: In this conceptual paper, insights from Ajzen's theory of planned behaviour are utilised to develop propositions regarding the influence of entrepreneurial identification on intentions to search for opportunities and commit resources to exploiting opportunities. The role of two behavioural controls - entrepreneurial munificence and domain relevant knowledge - are discussed as influencing both an individual's ability and desire to recognise and exploit opportunities. Opportunity recognition and exploitation are also suggested to reinforce an individual's entrepreneurial identification, creating a recursive relationship between identification and entrepreneurial behaviour.\nFindings: The lack of insight into cognitive processes has contributed to what some have labelled the \"death\" of intentions research in entrepreneurship. In exploring identification, this study suggests that measuring identification - in place of attitudes toward entrepreneurship - may actually refine analytical models based on Ajzen's theory of planned behaviour.\nOriginality/value: Ajzen's theory of planned behaviour has often been applied in entrepreneurial studies, but for the voluminous body of research devoted to intentions, little has delved into the cognitive processes whereby people develop intentions to entrepreneurial behaviours. This paper provides a conceptually unique marrying of identification with entrepreneurial intentions in an attempt to redress this gap in the entrepreneurship literature. 
This study also speculates as to how practitioners - educators, legislators or managers - might profitably use this study to guide their efforts to reinforce entrepreneurial behaviours in the classrooms, organisations and communities.\n",
"role": "assistant"
}
] |
Measuring customer experience in physical retail environments
|
[
"Satisfaction",
"Loyalty",
"Customer experience",
"In-store"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\n1. Introduction: Since Holbrook and Hirschman (1982) first recognized the experiential dimension of customer behavior, numerous studies have stressed need for in-depth study of the role of customer experience (CX) in marketing, particularly in services (Edvardsson et al., 2005; Jaakkola et al., 2015). Marketing academics and practitioners agree that a largely positive CX will create a unique, sustainable advantage over time for any firm. CX is thus a cornerstone of marketing (De Keyser et al., 2015), and the theoretical knowledge established to date provides a foundation from which to develop fuller understanding of the concept.\n2. Conceptualization of CX: Poulsson and Kale (2004) define experience as a state produced in any individual in response to a stimulus. For Schmitt (1999a), experiences are triggered by specific stimuli; they are not self-generated but induced, and they have cause and purpose. Thus, experiences are not evaluations or affective states present in the individual but states that arise in the presence of a stimulus (Brakus et al., 2009). Goode et al. (2010) identify the component parts of an experience as thoughts, emotions, activities, and appraisals during or as the result of a stimulus. For Poulsson and Kale (2004), these states are profoundly interconnected and feed back into each other as the experience develops.\n3. Measurement of CX: While CX has received substantial theoretical development, little attention has been given to measuring CX in service research (Lipkin, 2016). While CX and service experience are often used interchangeably in service research (Klaus and Maklan, 2012), the terms differ. Whereas service experience is limited to the actor experiencing the service, CX encompasses direct and indirect customer contacts with the experience provider (e.g. brand, store). Since this study analyzes ISCX, the customer is the actor experiencing the service. 
As the context of analysis is the physical retail store, CX is inherently determined by the direct interaction between customers and other actors in the service encounter (Tax et al., 2013).\n4. Components of CX: Based on the literature review's findings, this study analyzes ISCX in the retail setting as a holistic construct that includes the customer's internal responses to the service stimuli (cognitive, affective, and physical) and the customer's social interaction with other actors involved in the service encounter. The following sections develop theoretical definitions of each type of response.\n5. Development of the scale to measure ISCX: This section proposes the scale to measure ISCX. To determine the scale's configuration and avoid measurement misspecification, the dimensionality, nature of the relationships, and composition of the scale's components must be determined (Diamantopoulos et al., 2008; Jarvis et al., 2003; MacKenzie et al., 2011). First, one must establish the scale's dimensionality. If the components of ISCX describe unique aspects of the construct, and the exclusion of any of the components restricts its conceptual domain, the construct must be considered as multidimensional from a conceptual perspective (MacKenzie et al., 2011). Second, one must define the nature of the relationships. If three conditions are fulfilled - the components of the ISCX are considered as defining characteristics, the construct ISCX is a function of its components, and a change in only one of the components can be associated with a change in the focal construct - then the components are best thought of as formative indicators of the second-order focal construct (Diamantopoulos et al., 2008; MacKenzie et al., 2011). The third aspect to consider is the composition of ISCX components. 
If eliminating one or more items does not compromise each component of ISCX conceptually, then the components of ISCX can be represented by reflective indicators (Bollen and Lennox, 1991; Borsboom et al., 2003; Jarvis et al., 2003; MacKenzie et al., 2005).\n6. Discussion: 6.1 Theoretical contributions\n",
"role": "user"
},
{
"content": "Purpose: In a world where customer empowerment is continuously increasing and changing the service landscape, retailers must provide memorable shopping experiences to retain and attract new customers. When customers decide to go shopping in physical stores, they expect to enjoy their visit, experiencing cognitive, affective, social, and physical responses evoked by in-store stimuli. The purpose of this paper is to propose and validate a scale to measure in-store customer experience (ISCX).\nDesign/methodology/approach: This study's theoretical review of customer experience (CX) demonstrates that a formative model provides the best structure for measuring the construct ISCX. Furthermore, the study follows the guidelines for rigorous construction of a formative scale, which include three main stages: generation of items, scale purification, and assessment of scale validity and reliability.\nFindings: The results provide evidence that a formative third-order scale with a reflective second-order dimension (social experience) and three reflective first-order dimensions (cognitive, affective, and physical experience) has satisfactory psychometric properties. The findings also provide useful information on the effect of the ISCX scale on key performance variables such as satisfaction and loyalty to the store.\nOriginality/value: The ISCX scale proposed constitutes a useful multi-concept diagnostic tool for use by retailers to create fully experiential shopping environments with differential value for the customer. By providing a complete, robust, precise measure of CX in a retail environment, the scale gives researchers a structured way to examine the causes and consequences of CX in retail.\n",
"role": "assistant"
}
] |
Determining business models in bottom-of-the-pyramid markets
|
[
"Decision making",
"Bottom‐of‐the‐pyramid (BOP)",
"Analytic network process",
"Interdependency",
"Bottom up decision making"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\n1 Introduction: Multinational companies are striving to become more competitive and profitable by exploiting opportunities in new markets. The bottom-of-the-pyramid (BOP) segment represents a particularly notable market that is exhibiting rapid growth. The BOP refers to the segment that includes the poorest socio-economic group: those earning less than 1,500 USD in annual income. Due to low prospects for profitability in markets composed of these consumers, many multinational companies have deemed them to be of little importance. Nevertheless, BOP markets hold many potential advantages in terms of their size, rapid growth, less competitive environment, and opportunities for innovation. In other words, the potential of these markets is undeniable and has just started to gain worldwide recognition. However, current business models are not appropriate for BOP markets because they are characterized by different income levels and distribution systems. Due to the unique characteristics of BOP markets, many studies analyzing success factors for BOP markets in particular have been conducted. As a result, several criteria applicable to new BOP markets have been established. However, in order to apply these criteria, a new problem-solving methodology is needed for two reasons. First, the selection of an effective solution requires the use of a model that provides an overall analysis of each alternative based on various criteria. Second, the influence of interdependencies among criteria relevant to BOP markets has been addressed only in a few studies thus far. 
Therefore, this study aims to develop criteria that are especially suitable to the BOP environment and proposes a model that allows for potential alternatives to be prioritized and accounts for interdependencies among the criteria.\n2 Background and research motivation: 2.1 BOP market\n3 Proposed methodology: The proposed methodology selects one alternative solution through the following three steps. First, we create and analyze a list of selection criteria and possible alternatives. Next, we construct an ANP model from our list. Finally, we convert this ANP model into a supermatrix and calculate the relative importance of the alternatives. The following image illustrates each of the three steps in greater detail (Figure 2).\n3.1 Identifying the selection criteria and alternatives\n4 An illustrative example: 4.1 Problem definition\n5 Conclusion: In this paper, we have analyzed how firms in BOP markets can make optimal decisions with regard to their product strategies. We have chosen to use ANP methodology to consider correlation effects among the factors that influence success in BOP markets. The proposed list of criteria consists of three clusters - product development, infrastructure, and customer service - as well as an alternative cluster that contains all the alternative solutions. Through an illustrative example, we have demonstrated the process of prioritizing among possible alternatives using pairwise comparisons. We used ANP to create a network structure that can incorporate correlations among criteria that are influential in BOP markets. Moreover, the study proposed an effective decision-making tool that takes correlations among criteria into account. The ANP model proposed in this study is not fixed. The included criteria and interdependencies can be modified according to the opinions of local experts. The study proposes three clusters and 11 criteria, but there could be additional elements that may influence success in BOP markets. 
By considering additional factors or providing more detailed specifications for the existing criteria, it is possible to further improve the accuracy and reliability of the ANP model's results.\n",
"role": "user"
},
{
"content": "Purpose: - This paper aims to develop a multi-criteria approach for determining business models in bottom-of-the-pyramid (BOP) markets.\nDesign/methodology/approach: - Analytic network process (ANP) was employed to construct a decision-making model of quantitative and qualitative factors relevant to BOP markets. Alternatives can be evaluated, respectively, and further business implications can be delivered to decision makers through continuous improvement of the model.\nFindings: - ANP is a tool that can address the interdependencies among decision elements and alternatives in the BOP markets. Moreover, it can be employed in structural analysis of the network of relationships among the selection criteria.\nPractical implications: - Decision makers can make more informed decisions by using the proposed approach, which is targeted toward BOP customers. This approach also overcomes the flaws of previous approaches.\nSocial implications: - The successful selection of business models for BOP markets can change how multinational companies think about BOP consumers, allowing the poor to be perceived as value-demanding customers. In addition, if multinational companies create new local business models, the quality of life of the poor could be improved.\nOriginality/value: - The consideration of interdependencies among the criteria relevant to the selection of successful business models in BOP markets is a novel conceptual contribution.\n",
"role": "assistant"
}
] |
Role of gender on the relationship between abusive supervision and employee's intention to quit in Indian electricity distribution companies
|
[
"Gender",
"India",
"Abusive supervision",
"Intention to quit",
"Role theory"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\nIntroduction: In the past few decades, there has been a steady increase of research on dysfunctional and counterproductive organizational behaviours (Robinson and Bennett, 1995; Anderson and Pearson, 1999; Duffy et al., 2002). Although the majority of these studies involve rank and file employees (Tepper et al., 2006), quite a few studies have also reflected the darker side of those in leadership or supervisory position (Tepper et al., 2001; Aryee et al., 2007; Schyns and Schilling, 2013). One such negative supervisory construct is abusive supervision which refers to an employee's perception of his/her supervisor's abusive and hostile behaviour towards the employee. These abusive supervisory behaviours include non-physical verbal and non-verbal behaviours such as verbal intimidation, withholding of critical information, use of aggressive body language, public ridicule and meting out silent treatment to the subordinate (Keashly, 1998; Tepper, 2000). Research confirms that these dysfunctional supervisory behaviours yield various adverse effects on an employee's work behaviour and job performance such as decreased job satisfaction, diminished organizational commitment, limited display of organizational citizenship behaviour, reduced job engagement and higher turnover intention (Tepper, 2000; Zhang et al., 2011; Martinko et al., 2013; Pradhan and Jena, 2016).\nTheory and hypothesis development: Despite the fact that most leadership studies have primarily focused on the constructive and productive leadership behaviours, recent years have witnessed a healthy spurt of researches related to abusive or deviant leadership behaviours (Wu, 2008). As discussed earlier, one such destructive supervisory construct is abusive supervision. Tepper (2000, p. 
78) defined abusive supervision as \"subordinates' perceptions of the extent to which their supervisors engage in the sustained display of hostile verbal and nonverbal behaviours, excluding physical contact\". Though there are several constructs that explain negative workplace behaviours like petty tyranny, toxic leadership and workplace bullying, these constructs differ from Tepper's (2000, 2007) conceptualization of abusive supervision in four important characteristics.\nMethod: Participants and procedure\nMeasures: Abusive supervision\nResults: Before testing the hypotheses, we carried out univariate ANOVA analysis to analyse the gender differences prevailing in abusive supervision and intention to quit. The findings (Table I) revealed significant gender differences for abusive supervision (F (1416) = 39.91, p < 0.01, d = 0.48) and intention to quit (F (1416) = 6.21, p < 0.05, d = 0.14). The scores of female respondents were found to be higher (M = 106.31, SD = 12.24) than those of their male counterparts (M = 91.93, SD = 15.51) in abusive supervision; similarly, for the intention to quit variable, female respondents scored higher (M = 18.41, SD = 7.93) than male respondents (M = 7.84, SD = 6.81). The computed effect sizes can be considered average and small (Cohen, 1977).\nDiscussion: The current study serves two objectives: the first objective was to investigate the relationship between employee's perception of their supervisor's abusive behaviour and their intention to quit. The second objective of the study was to explore the different reactions of female and male employees towards abusive supervision, i.e. to test the moderating role of gender difference on the relationship between abusive supervision and intention to quit. The result of the study (H1) supports previous findings that there is a significant relationship between abusive supervision and employee's intention to quit (Schyns and Schilling, 2013). 
Our study also found satisfactory support for H2, which asserts that subordinate's gender will moderate the relationship between abusive supervision and intention to quit. Specifically, the relationship is found to be significantly positive for female subordinates, whereas the relationship is found to be insignificant for male subordinates.\nConclusion: Our research is one of the few studies which have conceptualized abusive supervision as an interpersonal stressor that influences subordinate's intention to quit. To establish our assertion, we have invoked COR theory, which explains how abusive supervision drains the subordinate's critical resources, thereby forcing the individual to consider quitting the organization. Our study also investigated the moderating role of gender on the abusive supervision-intention to quit relationship. We build our assertion on role theory, which states that the two genders react differently to negative situations (in this case, abusive supervision). Our finding supports our proposition that female subordinates react more strongly to abusive supervision than their male counterparts and quit the organization more frequently than their male colleagues. We also provide insights for organizations into how to minimize the deleterious effect of abusive supervision by providing counselling, support and training to both the persecutor and the victim.\n",
"role": "user"
},
{
"content": "Purpose: The purpose of this paper is twofold; first, to examine the relationship between employee's perception of abusive supervision and their intention to quit the organization and, second, to investigate the moderating role of gender differences on this relationship.\nDesign/methodology/approach: Data were collected from 227 employees working in an Indian electricity distribution company through self-report questionnaires using a time-lagged design on two occasions (span between T1 and T2 was 3 to 4 weeks).\nFindings: The finding of the study confirms that abusive supervision is strongly related to subordinates' intention to quit. The study also reveals that female employees intend to quit the organization more often than their male counterparts when they perceive their supervisors to be abusive.\nOriginality/value: The research provides insight into how an interpersonal stressor like abusive supervision depletes an employee's critical resources (conservation of resources theory) and thereby amplifies the employee's intention to quit. The study is among the first to also reveal different coping strategies (to quit or not to quit) used by male and female employees (in line with \"Role theory\") when faced with an interpersonal stressor like abusive supervision.\n",
"role": "assistant"
}
] |
Influences of workplace environment factors on employees' training transfer
|
[
"Environmental factors",
"Transfer of training",
"Motivation to transfer"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\nIntroduction: In human resources management and development, the transfer of training refers to a process by which trainees recall knowledge, skills and attitudes acquired from participation in training and subsequently apply them to their tasks. This suggests that a training program would amount to nothing unless the participants engaged in training transfer activities. From the past to the present, according to Naquin and Baldwin (2003), Grossman and Salas (2011) and Cowman and McCarthy (2016), the rate of training/learning transfer has remained a mere 10 percent, despite the huge, ever-increasing expenditures on human resources training and development programs. \"Why does such an incompatibility exist?\" has thus become a byword of most practitioners in the field of human resources.\nResearch parameters and literature review: Training transfer (TT)\nResearch methodology: Participants\nResearch findings and discussion: In Table I, the means of the workplace environment (ENV), motivation to transfer (MT) and training transfer (TT) variables/factors were, respectively, 3.95, 4.17 and 4.15 (on the six-point Likert scale), indicating that most respondents agreed with the question descriptions. The corresponding skewness values were -0.84, -0.38 and -0.83, the negativity suggesting that most of the respondents gave scores higher than the average. Meanwhile, the corresponding kurtosis values were all positive (2.08, 1.73 and 2.64), indicating tall distribution curves.\nConclusion and discussion: In this empirical research, the findings revealed that the workplace environmental factors statistically significantly influence the participants' training transfer decisions and behavior (Figure 2), which is consistent with Baldwin and Ford (1988), Na-nan (2010), Pham et al. (2013), Noorizan et al. (2016) and Chauhan et al. (2016). 
The workplace environmental factors under study encompass organizational support, supervisor support, peer support, technological support and opportunities to utilize the acquired skills/knowledge. It can be said that there is a relationship between the workplace environment, motivation to transfer and transfer of training. In other words, the more conducive and supportive the workplace environment is, the higher the likelihood that employees will apply the training they acquire to the tasks they are responsible for (i.e. training transfer).\nImplications for practice: Based on the research results, we suggest several recommendations to enhance the transfer of training from the classroom to the workplace.\nLimitation and future research: It is important to mention the limitations of this study. Due to limited time and cost, this study was conducted among business school students. Future research would benefit from the use of larger samples or other populations. As a quantitative methodology was used, it could not fully explore the data in depth. A mixed method is suggested for future research.\n",
"role": "user"
},
{
"content": "Purpose: The purpose of this paper is to investigate the direct and indirect influences of workplace environment factors on the motivation to transfer the acquired training (MT) and the transfer of training. The workplace environment factors under study encompass the organizational support, supervisor support, peer support, technological support and the opportunity to use the acquired training.\nDesign/methodology/approach: In the study, the empirical data were garnered from a sample of 220 business school students, using a 52-question questionnaire. A research framework was first proposed and structural equation modeling subsequently utilized to test the research framework and determine the direct influence of the workplace environment factors on the transfer of training and also their indirect influence on TT through the MT variable.\nFindings: The structural equation model is congruent with the empirical data, with a χ2 statistic of 24.414, 21 degrees of freedom, a p-value of 0.273, a goodness-of-fit index of 0.977 and a root mean square error of approximation of 0.027. The findings revealed that the workplace environment factors directly influenced the training transfer behavior of the participants and indirectly influenced it through the MT of the individuals. Most importantly, the proposed model could achieve a coefficient of prediction (R2) as high as 82 percent.\nOriginality/value: Workplace environment is important to motivation to transfer and transfer of training. To enhance training transfer effectiveness, businesses should take into account the contributions associated with each environment factor in the design and implementation of training programs.\n",
"role": "assistant"
}
] |
A modification of the FRAD model for personal names in non-Roman languages
|
[
"Authority control",
"Authority data",
"FRAD model",
"Non-Roman languages",
"Non-Roman scripts"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\nIntroduction: Functional Requirements for Authority Data (FRAD) was proposed in 2009 as a conceptual model for authority data. The FRAD, which was approved by the International Federation of Library Associations (IFLA), provides a framework for the analysis of the functional requirements of authority data needed to support authority control and for the international sharing of authority data (IFLA, 2013). As highlighted in the Virtual International Authority File (VIAF), the international sharing of authority data is an increasingly important issue. FRAD was designed as a common base for data sharing.\nLiterature review: Literature related to this topic is broadly divided into authority control of names in non-Roman languages and authority data modeling for non-Roman languages.\nDifferences among transliteration, transcription, and Romanization: The ISO 5127:2001 (Information and Documentation: Vocabulary) defines Romanization as the \"representation of non-Latin writing systems in the Latin alphabet by means of transliteration, transcription or both.\" The word transliteration means the \"representation of the characters of one writing system, alphabetic or syllabic, in terms of corresponding characters of a second writing system.\" Transcription is defined as the \"representation of the pronunciation of a given language by the characters of a writing system or by a specially devised system of notations\" (International Organization for Standards, 2001). The library community, however, frequently interprets transcription as \"the action or process of transcribing or copying\" (\nThe parent-child relationship between a vernacular form and its Romanization and/or transcription: As depicted in Table I, Romanization forms of names can be divided into two types. One is the name that is currently in use. 
For example, when a person who has a Chinese vernacular form of their name publishes an English book, his or her Romanized name is represented in the Latin alphabet. In this case, the Romanized name is generally decided by the author and need not be in accord with any one Romanization scheme. In other words, the author chooses his or her \"English name,\" which is more common in western countries.\nReview of the current FRAD model: In FRAD, names expressible in several writing systems are processed by three elements: language, script, and transliteration scheme. Writing systems involve three name attributes (\"language of name,\" \"script of name,\" and \"transliteration scheme of name\") and three CAP attributes (\"language of Base Access Point,\" \"script of Base Access Point,\" and \"transliteration scheme of Base Access Point\"). The phrase transliteration scheme is synonymous with Romanization scheme, because all examples for transliteration shown in FRAD are, actually, examples of Romanization.\nA modification of the FRAD model for recording names in non-Roman languages: This study proposes a modified FRAD model that matches names in non-Roman languages as well as western languages. The model is based on the FRAD model, because despite the problems designated above, the model is accepted globally and can depict the structure of authority data comprehensibly. Given the problems in the current FRAD model, the following components were identified for a modification of the FRAD model to accommodate names in non-Roman languages.\nDiscussion and future work: The modified FRAD model proposed in this paper organizes several CAPs in non-Roman languages. Several writing systems embraced by non-Roman languages are accommodated by Romanization and transcription relationships, which are newly added to the relationships defined in FRAD. 
In addition, although the \"alternate script relationship\" of the original FRAD model includes both Roman and non-Roman transliterations, this refers only to non-Roman transliteration relationships in the modified model. Moreover, to circumvent the vague definition of transliteration in FRAD, the modified model makes the attribute \"transliteration scheme\" obsolete. Instead, the \"Romanization scheme\" is established in its place. The \"Romanization scheme\" attributes are assigned to the CAP based on the name, not to the name itself. The entity name has a \"language\" attribute that is inherited by the CAP and any derived CAPs.\nGlossary: AACR2 Anglo-American Cataloging Rules (second edition)\nConclusion: This paper proposes a modification of the FRAD model for matching personal names in non-Roman languages as well as names in western languages. Attributes and relationships of the FRAD model are modified for clarification based on the definition of ISO 5127:2001. Because the proposed model only modifies attributes and relationships regarding representations of names in non-Roman languages, the model retains high compatibility with the original FRAD model. The ambiguous definitions of attributes and relationships in the FRAD model, which limit their applicability to names in non-Roman languages, are resolved in the modified model. This new model clearly illustrates two types of CAPs: those directly based on names and those derived from other CAPs. The latter should complement the data in their parent CAPs to assist identification. Thus, the parent and the child CAPs should be handled as a pair. The proposed model is universal in that it accommodates both names in non-Roman languages and names in western languages. The attributes and relationships proposed in this modified FRAD model should be considered in the construction of the new authority data format.\n",
"role": "user"
},
{
"content": "Purpose: The purpose of this paper is to propose a modification of the Functional Requirements for Authority Data (FRAD) model to accommodate names in non-Roman languages with diverse writing systems.\nDesign/methodology/approach: The model modifies and clarifies the attributes and relationships regarding non-Roman languages of the current FRAD model, based on the definitions of transliteration, transcription, and Romanization in ISO 5127:2001. The modified model overcomes the problems inherent in the current FRAD model when handling writing systems in non-Roman languages.\nFindings: The proposed model clearly shows the hierarchy of Controlled Access Points (CAPs). CAPs are either based directly on names or derived from other CAPs. For name or person identification, a CAP and its derivations should be treated as a set. The proposed model also resolves ambiguity in the definition of the word transliteration in the current FRAD model.\nOriginality/value: By establishing new relationships and amending the attributes of the FRAD model, the proposed model better expresses personal names in non-Roman languages as well as names in western languages. It also organizes complex writing systems of non-Roman languages. The proposed model could lead to an improved format for authority data and will facilitate international sharing of data.\n",
"role": "assistant"
}
] |
Who knows more about food allergies - restaurant managerial staff or employees?
|
[
"Training",
"Restaurants",
"Knowledge",
"Attitudes",
"Food allergies"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\nIntroduction: A food allergy is an adverse reaction to a specific protein in food. Food allergies affect 15 million Americans, including six million children, and their prevalence is on the rise (Food Allergy Research and Education, 2017a; National Institute of Allergy and Infectious Disease, 2017). The evidence has also shown that self-reported food allergies among US adults (n>4000), who were randomly selected through a cross-sectional study, increased significantly from 9.1 percent in 2001 to 14.9 percent in 2006 and 13 percent in 2010 (Verrill et al., 2015). Changes in the environment, lifestyle and dietary habits (Sicherer, 2011) may have resulted in the increase of food allergies. While some symptoms of food allergic reactions are mild (i.e. rashes and itchy throat), the severe reaction known as anaphylactic shock could be lethal. Of the 30,000 episodes of food-induced anaphylaxis that occur in the USA each year, 100 to 200 lead to death (National Institute of Allergy and Infectious Disease, 2017). As individuals differ in how their food allergy is provoked, and in the time and age of onset, diagnosis is difficult. Currently, the double-blind, placebo-controlled food challenge is considered the gold standard of food allergy diagnosis. There is no cure for food allergies, and only strict avoidance of allergens and a high level of vigilance can prevent allergic reactions to food (Food Allergy Research and Education, 2017a).\nMethodology: The Institutional Review Board of a university located in the southeastern part of the USA approved the research protocol prior to data collection. The target population of this study were the managerial staff (i.e. owners and managers), and front- and back-of-the-house employees (i.e. 
cooks, hosts and servers) working in restaurants in the USA.\nResults: Demographics and food allergy policies\nDiscussion and implications: Approximately 4 percent of adults (nine million) and another 8 percent of children (six million) in the USA are affected by food allergies (Food Allergy Research and Education, 2017a; National Institute of Allergy and Infectious Disease, 2017). Individuals with food allergies may have better quality of life if they are able to dine out with a sense of security (Hoogeveen et al., 2016). To ensure customers with food allergies have the opportunity to enjoy the pleasure of dining out, restaurants should be prepared to accommodate them. This study compared managerial staff and employees' attitudes toward food allergies, their food allergy knowledge and food allergy-related training.\nConclusion: The risk of accidental exposure to food allergens in restaurants is high (Furlong et al., 2006; Versluis et al., 2014), which implies that the restaurant industry needs to be better prepared to prevent such life-threatening events. Managerial staff should make sure that policies are in place for food allergy accommodation. Restaurant employees, both front- and back-of-the-house, also play an active role in the prevention of food allergic reactions. This study compared food allergy management, knowledge, and training among restaurant managerial staff and employees. Findings of this study indicated that restaurant managerial staff and employees both believed that food allergies have become more prevalent and customers should be responsible for expressing their food allergy needs. The managerial staff and restaurant employees have similar total knowledge scores but each has specific areas for improvement. There were gaps in the frequency of, and perceived barriers to, food allergy training provision between restaurant managerial staff and employees. 
Combined with evidence from previous literature, this study presents opportunities to improve food allergy knowledge and training practices among those working in the restaurant industry.\n",
"role": "user"
},
{
"content": "Purpose: The purpose of this paper is to compare restaurant managerial staff and employees' attitudes toward food allergies, their food allergy knowledge and food allergy-related training.\nDesign/methodology/approach: An online questionnaire was administered through a market research company (Qualtrics(r) Inc.). A total of 110 managerial staff and 229 restaurant employees completed this questionnaire.\nFindings: Most restaurants were willing to modify recipes for customers with food allergies. Respondents felt that the customers should be responsible for expressing their food allergy needs. Both groups were able to identify certain symptoms of allergic reactions to food but lacked knowledge of allergen-handling practices. The managerial staff and employees had knowledge differences about how to respond to an allergic reaction (p<0.001) and how to identify peanut derivatives on food labels (p<0.000). In total, 70 percent of the managerial staff indicated that they provided employee food allergy training but only 40 percent of employees indicated receiving such training. The managerial staff identified a lack of employee commitment and interest as barriers to training provision. However, the employees identified different reasons (i.e. it is unnecessary and not beneficial).\nPractical implications: Restaurants should have policies in place to accommodate customers with food allergies and make sure food allergy-related information is communicated to their customers via different outlets (i.e. restaurants' websites and printed menus). Food allergy training should be designed based on the areas that need improvement, as knowledge may differ based on the position in the restaurants (managerial staff vs employees) and types of restaurants (quick service vs casual dining). 
Pragmatic strategies need to be identified to better encourage and motivate restaurant employees to attend a food allergy training.\nOriginality/value: Understanding the similarities and differences in attitudes, knowledge and training on food allergies between restaurant managerial staff and employees would help restaurants to plan and implement policies and training that best fit both managerial staff and employees.\n",
"role": "assistant"
}
] |
Perceptions of fairness in financial services: an analysis of distribution channels
|
[
"Internet",
"Fairness",
"Channels"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\nIntroduction: It is known that when developing robust buyer-seller relationships fairness is important (see Morgan and Hunt, 1994; Gronroos, 1994; Gebhardt, 2008; Aggrawal and Larrick, 2012). Undoubtedly, interactions that are perceived as being fair have a positive outcome, just as unfairness will be viewed negatively because, psychologically at least, fairness is about some form of justice. The notion of fairness is compelling for services because during each service encounter the customer makes a judgement about how fair or unfair the outcome was (Berry, 1995), with customers seldom attempting to obtain the best service but instead seeking fair treatment (Chen et al., 2012). For managers this means that they can no longer simply rely on creating a service with the expectation of a take-up, particularly when comparisons about fairness are made (Carr, 2007). Therefore, fairness influences how customers judge a service.\nConceptualising fairness: Based on a review of the extant body of knowledge, our thesis is that a gap exists in our understanding of when a market position may, or may not, be perceived by customers to be fair. Where research has been undertaken it has tended largely to focus on the behavioural aspects of fairness and on cases where there is a tangible product outcome. The role of fairness is significant because where there is a perception that the relationship is unfair it will have a detrimental impact (Samaha et al., 2011). The conceptual focus of our work is customers' perceptions of fairness of financial services and we suggest that fairness is even more important in the case of financial services where evaluations regarding purchases sometimes cannot be made for many years. Further, as part of a process to develop buyer-seller relations, fairness can be positioned as a source of competitive advantage (Loch et al., 2012). 
Therefore, given a vantage point based on the body of literature, we stress fairness's importance for financial services because it can help to overcome some of the barriers associated with the high-credence qualities of such an intangible service.\nMethodology: In our earlier work (see Devlin et al., 2014) we presented a fairness scale which we use as a basis for the measurement employed in this study to assess perceptions of fairness amongst customers of financial services providers and draw conclusions regarding the impact of the distribution channel used. Given the contribution we are making by understanding perceptions of fairness, we treated the constructs as being latent. As an approach this is congruent with numerous other studies measuring perceptions where the protocol has been the use of latent constructs.\nData analyses and results: Data analyses were conducted in three stages. In the first stage exploratory factor analysis (EFA) was used to check for the unidimensionality of the fairness dimensions. In the second stage confirmatory factor analysis was used to establish the reliability and validity of the scales used to measure the constructs in our study. Finally, we utilised one-way ANOVA to test for the differences in fairness across the different channels.\nDiscussion and implications: The major contribution we set out to make was to complement the debate in relation to how fairness is perceived relative to different distribution channels, for financial services. This understanding of fairness is important given the contribution it makes to service operations management. Further, understanding fairness and the fair treatment of customers is of wider societal importance because of the interest shown by policymakers towards the fair treatment of customers. Specifically, this is important because of recent scandals of mis-selling in the UK where customers may have bought protection type products unwittingly. 
Thus, underpinned by a detailed nuanced approach to fairness and with data drawn from customers, we provide insights for both financial services practitioners and theorists alike.\n",
"role": "user"
},
{
"content": "Purpose: The purpose of this paper is to examine fairness within financial services. In making the contribution the authors examine fairness by four different channels to market and across a range of financial services products. The product categories in the study are those with the highest density levels in the UK.\nDesign/methodology/approach: Underpinned by the development of new measures, this paper is based on telephone interviews and on-line surveys with UK customers of financial services. More than 1,000 customers participated in the survey during the middle of 2013. After reporting the measurement model, the authors use ANOVA to report the differences in the perception of the dimensions of fairness by channel to market.\nFindings: The authors found there to be significant differences in perceptions using different channels to market. The research shows that where a face-to-face interaction takes place, such as branch contact, they are perceived to be fairer than when interactions are more remote. Given the dimensions of fairness, this reveals the importance of communications during explanations so that interactions are deemed to be fair.\nResearch limitations/implications: The focus of this research was the examination of fairness within the setting of the UK's financial services sector. The authors are minded that if the research is replicated in other countries or contexts then different aspects of fairness might emerge.\nPractical implications: Given the challenges faced by the financial services sector, there are implications for practitioners because they must be seen to be treating customers fairly. The research shows that remote contact such as the internet is not perceived as being as fair as face-to-face contact. 
The fair treatment of customers is likely to lead to positive brand benefits.\nOriginality/value: This study complements the understanding of fairness and provides insight for scholars and practitioners, within financial services.\n",
"role": "assistant"
}
] |
Modelling the cancellation behaviour of hotel guests
|
[
"Hotel management",
"Cancellations",
"Booking channel",
"Online booking",
"Probit model"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\n1. Introduction: A significant number of hotel bookings are not realised due to cancellations or no-shows. The typical reasons include sudden illnesses, accidents, schedule conflicts, unexpected (family) obligations and natural catastrophes. Although the option to cancel a hotel booking (preferably at a low cost) is beneficial to presumptive hotel guests, it is a less desirable and possibly revenue-diminishing factor for hotel managers to deal with (Chen and Xie, 2013). Such losses are particularly high for last-minute cancellations (Chen et al., 2011; Koide and Ishii, 2005). However, new technologies involving online booking channels have dramatically changed customers' booking possibilities and behaviour (Dolnicar and Laesser, 2007; Lee et al., 2013). This adds a further dimension to the challenge of how hotels handle cancellations, which are no longer limited to traditional booking and guest characteristics.\n2. Theoretical background and formulation of hypotheses: Several factors - whether connected to guests' decisions or motivations to travel or the booking process itself - can affect the cancellation of a hotel booking. In most cases, the travel decision (Sirakaya and Woodside, 2005) is made first and then potentially followed by a hotel reservation. In this context, a guest:\n3. Empirical model: Presumptive guests who have booked a hotel room face two choices: check in or cancel their booking (or simply not show up). No-shows cannot be explicitly modelled in this study because the share of no-shows in total cancellations is negligible, at about 2 per cent. These two alternatives may be estimated by a probability model in which the outcome represents the individual decision of whether or not to cancel, which is assumed to be an unobserved latent variable. 
Here, the cancellation probability is modelled using a probit model (the individual index i is suppressed for the sake of convenience):\n4. Data and descriptive statistics: The data consist of individual bookings drawn from the hotel booking system of nine hotels belonging to a hotel chain located in Finnish Lapland for the period January 2011 to March 2016. It contains detailed information on guest- and booking-specific characteristics, including room prices, booking and arrival dates, dates of departure, room categories, number of guests (distinguishing between adults and children) and countries of residence (Falk and Vieru, 2018). In addition, the database reveals which of the following booking channels was used to book a given room:\n5. Empirical results: The probit estimations in Table AII show that the factors affecting the probability to cancel a hotel booking differ across channels. Among the factors identified, lead time and country of residence are the strongest determinants of cancellation probability. The share of correctly classified cancellations is about 92 per cent indicating a good fit. The likelihood of cancellation increases monotonically with booking lead time, meaning that late bookings are associated with a lower risk of cancellation. Overall, this is consistent with the theoretical prediction formulated in H2. An exception can be observed in bookings made via travel agencies, which follow a U-shaped pattern. Here, the cancellation probability is lowest for bookings made between 5 and 9 or 10 and 24 days prior to arrival. Early reservation increases the probability of cancellation for both online and offline bookings (by 52 and 11 percentage points, respectively, compared to the reference category of 1-4 days).\n6. Discussion and conclusions: This study provides new empirical evidence of the probability to cancel a hotel booking, based on unique data originating directly from a booking system. 
The main results of the probit estimations largely follow the hypotheses derived from the eclectic theoretical approach, where the likelihood of cancellations is significantly related to booking lead time, country of residence and season, as well as the composition and size of the travel group. The timing of booking is crucial, with early bookings exhibiting a significantly higher cancellation probability. Another factor of importance is the channel, where those who book via travel agencies more frequently keep their reservations than those who book online or privately offline. The role of booking lead time and country of residence of the presumptive hotel guest in determining the cancellation probability is more pronounced for online than for offline or travel agency bookings.\n",
"role": "user"
},
{
"content": "Purpose: The purpose of this study is to provide new insights into the factors that influence cancellation behaviour with respect to hotel bookings. The data are based on individual bookings drawn from a hotel reservation system database comprising nine hotels.\nDesign/methodology/approach: The determinants of cancellation probability are estimated using a probit model with cluster adjusted standard errors at the hotel level. Separate estimates are provided for rooms booked offline, through online travel agencies and through traditional travel agencies.\nFindings: Evidence based on 233,000 bookings shows that the overall cancellation rate is 8 per cent. Cancellation rates are highest for online bookings (17 per cent), followed by offline bookings (12 per cent) and travel agency bookings (4 per cent). Probit estimations show that the probability of cancelling a booking is significantly higher for early bookings, large groups that book offline, offline bookings during high seasons, bookings not involving children and bookings made by guests from specific countries (e.g. China and Russia). Among the factors, booking lead time and country of residence play the largest role, particularly for online bookings.\nResearch limitations/implications: The analysis is based on individual-level booking data from one hotel chain in Finland, and therefore cannot be generalised for the total population of hotels in the country under observation.\nOriginality/value: The main contribution of this paper is a thorough investigation of the factors that influence cancellation behaviour at both the theoretical and empirical levels. Detailed and unique data from a hotel reservation system allow for new empirical insights into this behaviour.\n",
"role": "assistant"
}
] |
French football, foreign investors: global sports as country branding
|
[
"Globalization",
"Strategy",
"Brands",
"Qatar",
"Sport business",
"ESports",
"Football marketing",
"Paris Saint-Germain"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\n1. Introduction: Football marketing and professional clubs toward new challenges: As football (soccer) becomes even more international, the financial stakes are growing exponentially due to the increase in broadcasting rights and investments of both private and public partners. When the Deloitte Football Money League was first published covering the 1996-1997 season, Manchester United was the leader, with revenue of PS88m that year. Fast forward 20 years, Manchester United has regained the top spot from Real Madrid, with EUR689m revenue, nearly six times greater than in 1997 (Deloitte, 2017).\n2. Qatar, sport, football and Paris Saint-Germain: toward a global strategy?: While French football has been marked in recent years by foreign investors, the acquisition of PSG by Qatar Sports Investments (QSI) suggests a singular model. Our research notes that this investment is part of the overall strategy for the development of Qatar, which has made sport a cornerstone of its growth strategy. Indeed, with an area of only 11,586 km2, the emirate of Qatar has become one of the richest states in the world thanks to its hydrocarbon resources. The globalization of the Qatar brand has made this tiny Persian Gulf country a diplomatic and economic player in a region with powerful neighbors like the United Arab Emirates and Saudi Arabia. Our study shows that Qatar sees, with the strengthening of its international brand associated with sport, a way to consolidate its diplomatic influence and to protect its independence. Moreover, it appears that this approach contributes to the Qatar National Vision 2030 which seeks to shift the country from a carbon economy to a knowledge economy. The Deputy General Manager of the club (in charge of commercial activities), Frederic Longuepee, comments:\n3. 
Strategic objective and organization of Paris Saint-Germain by Qatar Sports Investments: Since its acquisition by QSI in June 2011, PSG has set in motion a strategy to compete with the biggest sports brands in the world. The strategic objective is to make PSG a powerful and global professional sports brand like the New York Yankees or Real Madrid. PSG wants to become a brand of entertainment expected to go beyond the strict framework of sport. If sporting success is a precondition for the globalization of PSG brand, it is insufficient.\n4. Toward reorganizing a multisports club: PSG has considerable economic means to fulfill its purpose and promote its development. The new organization of the club is based on three key elements:\n5. A (re)construction of the Paris Saint-Germain brand: The club brand is strengthening its seasons and sporting achievements. PSG's win against FC Barcelona (4-0) in the Round of 16 (first game) of the Champions League (2016-2017 season) feeds the club's branding. However, FC Barcelona's historical comeback against PSG by beating the club 6-1 (second game) to reach the Champions League quarterfinals negatively impacted the brand of the French club. But the process of changing the PSG brand, symbolized by the new slogan \"Dream Bigger\", succeeded via:\n6. Global diversification of growth drivers: The expansion of the club is based on the diversification of growth drivers. The international dimension is now in the DNA of the club and the integrated logic of globalization in various marketing initiatives proposed by the club. PSG has a flair for innovation, especially in marketing: global and regional sponsorship arrangements; digital activities; merchandising; co-branding with VIPs (e.g. Rihanna, Beyonce, etc.); social program; and ticketing and lodging. They fit together, overlapping most of the time, with the globalization process of the brand-club. Two examples illustrate the international expansion strategy developed by PSG.\n7. 
Conclusion: Marketing has become a key function, or even a state of mind, required for the development of professional football, and it is crucial for professional sports organizations to differentiate their brand and to innovate with their customer service and sponsors. Our findings extend earlier studies like those of Couvelaere and Richelieu (2005), which underline that the development and implementation of a brand strategy should be profitable for most professional football teams.\n",
"role": "user"
},
{
"content": "Purpose: This research aims to examine the fundamental elements regarding Paris Saint-Germain (PSG) marketing strategy. Specifically, the objective of this analysis is the implementation of globalizing the club brand of PSG.\nDesign/methodology/approach: The author held three 2-h face-to-face meetings. Given that the main criterion in the choice of the respondent for a case study was the quality of the information that could be obtained during the interview, the professionals were selected because of their direct involvement in the PSG's global marketing strategy. Data collection was conducted with the aid of several in-depth interviews with the Deputy General Manager (in charge of commercial activities), Frederic Longuepee, the Corporate and Brand Communications Manager, Nicolas Serres and the Brand Diversification and Development Manager (in charge of eSports), Fabien Allegre.\nFindings: It appears that since the acquisition of PSG by Qatar Sports Investments, the club has entered a new era and has set in motion a strategy to compete with the biggest sports brands in the world. The results of this research reveal that the globalization strategy of PSG is itself part of Qatar's internationalization approach. The findings also suggest that the global marketing strategy of the club is based on three key steps. First was the reorganization of the club to create an omnisports brand with a new CEO at its head, which is highlighted here. The rebuilding of the brand followed, through the creation of new identity and logotype, the acquisition of brands-athletes and controlled communication. 
Finally, the development of the club, which is based on the diversification of growth drivers.\nOriginality/value: Overall, these results contribute to the enrichment of sports clubs and global marketing research. This research offers a unique perspective and insights on the marketing strategic globalization of a professional sports club linked with the internationalization of a country. Best practices become visible through this analysis, which might serve as guidelines in other contexts.\n",
"role": "assistant"
}
] |
Procedural and interactional justice perceptions and teamwork quality
|
[
"Social justice",
"Product development",
"Team working",
"Quality",
"Turkey"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\n__NO_TITLE__: An executive summary for managers and executive readers can be found at the end of this article.\nIntroduction: Cross-functional teams are essential to new product development (NPD) and are used about 75 percent of the time in NPD projects (Griffin, 1997). The tasks involved in NPD require skills from multiple functional areas within the firm (marketing, design, R&D, manufacturing, or production), and representatives from all of these, and sometimes even the customer, work together to accomplish these tasks. NPD researchers have focused much attention on how cross-functional collaboration improves the firm's ability to develop new products efficiently and effectively (Gupta and Wilemon, 1996; Sherman et al., 2000; Lovelace et al., 2001; Keller, 2001). Recent literature has examined the impact of team collaboration on innovative project success (Hoegl and Gemuenden, 2001; Sethi and Nicholson, 2001; Hoegl and Proserpio, 2004). Little prior research, however, has investigated the impact of organizational justice perceptions on the collaborative functioning of cross-functional teams. Research in related settings suggests that exploring this area may increase our understanding of NPD team effectiveness. The extant studies on the impact of organizational justice on team effectiveness (Colquitt, 2004, p. 633; Roberson and Colquitt, 2005; Murphy et al., 2003; Chen and Tjosvold, 2002) examined virtual teams, student teams, and other non-NPD settings, but would appear to have direct implications for NPD team effectiveness. The literature on organizational justice distinguishes two justice constructs. 
Procedural justice refers to the extent to which decision-making procedures are judged to be fair by those who are subjected to them (Tyler and Lind, 1992), whereas interactional justice is about the quality of interpersonal treatment people receive during the enactment of these procedures (Bies and Moag, 1986, p. 44). The literature suggests that team collaboration can be improved if team members perceive that they are treated politely and equally (interactional justice), and if the procedures they must comply with are fair (procedural justice). While much research has examined the factors that lead to effective cross-functional teamwork (McDonough, 2000; Holland et al., 2000), little research has attempted to investigate the impacts of interactional and procedural justice on NPD team collaboration. Senior management seeking to increase NPD team performance must recognize that organizational justice may have substantial effects on teamwork quality and team performance. The objectives of this research are twofold. First, we build and empirically test a conceptual model (input-process-output) of NPD team effectiveness (Gladstein, 1984; Hackman, 1987) to establish the relationships between justice perceptions of members of cross-functional NPD teams (inputs), teamwork quality (process), and team performance (output). We define teamwork quality using Hoegl and Gemuenden's (2001) dimensions of collaborative team functioning. We believe this is the first complete model of NPD team effectiveness that explicitly tests the differential impact of interactional and procedural justice on collaboration. Second, we empirically test the moderating effect of environmental turbulence on the process-output relationship. Prior studies suggested that the impacts of collaboration on team performance differ according to turbulence or other environmental conditions (Moorman and Miner, 1997). 
While the effect of environmental turbulence on NPD has been extensively studied in recent literature (Calantone et al., 2003; Akgun et al., 2006b), the impact of environmental turbulence as a moderator between collaboration and team performance has not yet received attention. From our conceptual model (shown in Figure 1 and described below), we derive and test hypotheses using a sample of 117 firms operating in Ankara and Istanbul, Turkey. We present our analytical results, and discuss theoretical and managerial implications and further research.\nConceptual model: Teamwork quality\nEmpirical study: Sample and data collection procedure\nAnalysis and results: The data were subjected to a purification process in assessing reliability, unidimensionality, and discriminant validity (Anderson and Gerbing, 1988). To assess unidimensionality, measures were divided into four subsets: the six teamwork quality measures (communication, coordination, balance of member contributions, effort, cohesion and mutual support), the two outcome measures (team learning and speed-to-market), the antecedents of teamwork quality (procedural and interactional justice), and the moderator measures (technical and market turbulence) as recommended by Moorman and Miner (1997) due to small sample size. The results showed that all four models fit adequately: the six teamwork quality variables (χ2(89)=90.72, p=0.05, RMSEA=0.07, CFI=0.97), the outcome variables (χ2(119)=180.42, p=0.03, RMSEA=0.08, CFI=0.96), the antecedent variables (χ2(113)=132.22, p=0.04, RMSEA=0.07, CFI=0.93), and the moderator variables (χ2(92)=92.54, p=0.13, RMSEA=0.09, CFI=0.97). A series of two-factor models was generated (see Bagozzi et al., 1991). In these estimations, the factor correlations, considered individually, were restricted to unity and the fit of the restricted models (in total 66 models, 132 pairs of comparisons) was compared with that of the original model. 
The chi-square change (Δχ2) in all models, constrained and unconstrained, were significant (p<0.05), confirming that the constructs demonstrated discriminant validity. The measures were subjected to further confirmatory factor analysis (CFA) using AMOS 4.0, where all variables, excluding team size, were included in one CFA model. The CFA produced a good fit with an incremental fit index (IFI) of 0.93 and a comparative fit index (CFI) of 0.91 (also, χ2(76)=84.18, RMSEA=0.009). Teamwork quality was operationalized as a second-order construct with six dimensions. A second-order CFA of a model depicting communication, coordination, balance of member contribution, mutual support, effort and cohesion was run. This model yielded acceptable fit indices (χ2(125)=229.65; CFI=0.95; IFI=0.94; RMSEA=0.09). In addition, all first-order and second-order factor loadings were significant, thereby providing evidence that teamwork quality is a multifaceted construct, containing the six facets identified above. Reliabilities of the multiple-item reflective measures, and all correlations, are reported in Table I. The relatively low to moderate correlations provided further evidence of discriminant validity. Skewness and kurtosis values are well below the levels suggested for transformation of variables. An indication of the portion of variance attributable to functional relationships and the portion attributable to the use of common methods is nevertheless desirable in survey research. Podsakoff et al. (2003) suggested a number of statistical procedures whereby common method variance can be checked. Among these procedures is Harman's single-factor test. This test assumes that the first unrotated factor provides a good approximation of common method variance. The higher the amount of variance accounted for by the first factor, the higher is the common method variance. 
Factor analysis of the study research variables revealed first unrotated factors that accounted for 14.3, 16.8, and 19.3 percent of the variances. These results indicate that the probability of common method variance is limited. To test our hypotheses, a series of multiple regression analyses were performed using team size as control variable. Table II demonstrates the results of the antecedents of teamwork quality in NPD teams. First, each teamwork construct was regressed on the independent variables. A significant positive association was found between procedural justice and all teamwork constructs. Thus, H1a, H1b, H1c, H2a, H2b, and H2c were supported. However, the results showed that only two of the six facets of teamwork quality (coordination and balance of member contribution) are significantly associated with interactional justice (that is, communication, mutual support, effort and cohesion are not). Thus, H3b and H3c are supported, but not H3a, H4a, H4b, and H4c. Moreover, team size showed significant relationships with teamwork quality and its facets. This highlights the particular importance attributed to team size as a structural variable affecting teamwork quality. To assess the moderator role of environmental turbulence between teamwork quality, speed-to-market and team learning, a split group analysis was performed (Arnold, 1982; Moorman and Miner, 1998) by dividing the sample into turbulent and stable markets and technologies via a median split. The relationships between teamwork quality and speed-to-market, and teamwork quality and team learning were then examined in the high-turbulent and low-turbulent conditions (for both market as well as technical turbulence). 
The regression coefficients from high and low turbulence were compared using a t-test to determine if the coefficients were significantly different across the conditions. As shown in Table III, under both high and low turbulence conditions, teamwork quality has a significant impact on team learning. However, teamwork quality is positively associated with speed-to-market under only extreme conditions in both turbulent market and technical conditions. Also, comparing regression coefficients in low- and high-turbulent conditions shows statistical significance in all cases (p<0.05), indicating that the impact of teamwork quality on speed-to-market is more influential under high-turbulent conditions than low-turbulent conditions. Thus, teamwork quality is an important factor for team learning regardless of turbulence, however, teamwork quality is an important factor for launching a product faster if innovation teams are operating under extreme conditions. These findings partially support H5 (we further explain the findings in the following discussions).\nDiscussion and conclusions: Theoretical contributions\n",
"role": "user"
},
{
"content": "Purpose: - This paper aims to understand the role of organizational justice (procedural and interactional justice) as a precursor to new product development teamwork quality and team performance; to study the moderating impact of environmental turbulence on these relationships.\nDesign/methodology/approach: - This is a survey-based empirical study of 117 product/project managers based in Ankara and Istanbul, Turkey. A series of multiple regression analyses were used to obtain results.\nFindings: - Only two of the six facets of teamwork quality (coordination and balance of member contribution) are significantly associated with interactional justice; all six facets (coordination, balance of member contribution, communication, mutual support, effort and cohesion) are associated with procedural justice. Teamwork quality is significantly related to team learning and speed to market; environmental turbulence partially moderates these relationships.\nResearch limitations/implications: - Perceived organizational justice is an important precursor to NPD teamwork quality and team performance. The components of organizational justice (procedural and interactional justice) have different effects on the facets of teamwork quality. The relationships between these precursors and team performance are moderated by environmental turbulence.\nPractical implications: - To generate new products, NPD managers rely on teams that function well together and show good performance (good team learning and speedy time to market). The findings suggest that NPD managers can significantly improve NPD team performance by increasing team members' perceived level of organizational justice.\nOriginality/value: - While organizational justice has been previously shown to influence team performance, this relationship has not yet been examined in an NPD setting. This is valuable because of the overriding importance of well-functioning teams in NPD.\n",
"role": "assistant"
}
] |
Mutual gains or conflicting outcomes? How HRM benefits professionals
|
[
"HRM",
"Performance",
"Well-being",
"Mutual gains",
"Professional employees"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\n1. Introduction: There are many theories on how to achieve improved employee performance (e.g. commitment-oriented HRM activities and ability-motivation-opportunity), but as yet, there is no general consensus about their efficacy. Indeed, there have been recent concerns expressed in the literature that routes to achieving performance are often extracted by organisations at the cost of employees' happiness and health (Gavin and Mason, 2004). This has spurred a resurgence by researchers to explore more closely the relationship between employee well-being, people management and organisational performance (Peccei et al., 2013). We join this research agenda and, with our focus firmly centred on employees' happiness and health, examine the role of mutuality in the HRM-well-being-performance relationship.\n2. Research background and hypothesis development: We developed a conceptual model to examine the relationship between HRM practice, well-being and performance based on the mutual gains model of Peccei et al. (2013). This model posits that HRM practice positively impacts performance and also contributes positively to worker well-being. To clarify the HRM-well-being-performance relationship in terms of different types of well-being outcomes, our model includes two parts. The first focuses on the effects of HRM practices on well-being (work intensification and WLB) and performance (task and contextual). In the second part, we examine the impacts of well-being on performance. Figure 2 illustrates and summarises the relationships included in this model.\n3. Method: 3.1. Sample and procedures\n4. Results: 4.1. Descriptive statistics\n5. Discussion and concluding remarks: 5.1. General discussion\n",
"role": "user"
},
{
"content": "Purpose: Using the mutual gains model as a framework, the purpose of this paper is to explore the important issue of mutuality in employment relationships.\nDesign/methodology/approach: This study uses a sample of 215 New Zealand professionals to assess the relationships between commitment-oriented HRM practice, work intensification, work-life balance (WLB) and task and contextual performance.\nFindings: The authors find commitment-oriented HRM practice does not intensify the work experiences of professionals, but nor does it contribute positively to the achievement of WLB. Both these well-being types do, however, contribute to explaining professionals' task and contextual performance outcomes.\nResearch limitations/implications: The findings suggest current narrow interpretations of well-being need to be revisited, with the meaning of well-being, its measurement and its role in delivering performance outcomes afforded greater attention within HRM studies.\nPractical implications: A primary goal of managers is to deliver optimum performance outcomes. For professionals, the research suggests an important means to achieving this is by promoting positive well-being.\nOriginality/value: This study offers some important insights into the role mutuality plays in influencing performance outcomes. In addition, by exploring two contrasting facets of well-being, one health- and one happiness-related, the authors provide some empirical insights into how employees' well-being affects performance outcomes.\n",
"role": "assistant"
}
] |
Creative tourism: catalysing artisan entrepreneur networks in rural Portugal
|
[
"Creativity",
"Rural",
"Entrepreneurship",
"Networks"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\nIntroduction: In response to growing interest in artisan entrepreneurship and the role of culture and tourism in its growth (Ratten and Ferreira, 2017), this paper investigates the ways in which some artisans in rural areas in Portugal are being networked and guided to develop and implement creative tourism experiences derived from artisanal activities as part of a national project on creative tourism called CREATOUR. Creative tourism tends to differ from cultural tourism in terms of focus: while cultural tourism focuses on built heritage, museums and monuments, creative tourism focuses on image, identity, lifestyles, atmosphere, narratives and media (Richards, 2011; Triarchi and Karamanis, 2017). The contemporary vision of creative tourism incorporates an expanded focus on interactive connections between travelers and residents and emphasizes the importance of rooting creative actions to place (OECD, 2014; Blapp, 2015; Duxbury et al., 2019). With artists increasingly used to represent, market and enhance the visual image of destinations, this is a timely investigation of the link between artisan activity and tourism (Long and Morpeth, 2016). Furthermore, there is a growing recognition that artisans as businesses and residents in rural areas can serve to diversify rural areas' economic base and thus can be highly valuable in rural development programs (Duxbury et al., 2011).\nLiterature review: This study is informed by research on entrepreneurship, artisans, creative tourism and the rural development in this context. Beginning with entrepreneurship as an economic function and contextualized by growing attention to alternative conceptualizations of entrepreneurial roles, research on artisans is synthesized, including the types of entrepreneurship in which they engage and the link between creativity and economics. 
Following this, creative tourism is investigated as a realm of activity that can provide artisans with an avenue to remain profitable. Guided by the question of how artisans operate, the concept of the creative tourism ecosystem is introduced, wherein the functionalities of networks are explored. Finally, the rural context in which the artisans are situated is examined, highlighting the complexity of modern conceptualizations of the rural, the blurring of boundaries between urban and rural and the concepts of \"global countryside\" and \"creative countryside,\" as discursive tools to articulate what \"the rural\" is today.\nConceptual framework: The conceptual framework used for this paper, which adopts an interpretivist methodology, emerged from the data and a critical review of literature on creative tourism, artisan entrepreneurship and rural cultural work. Two main theoretical concepts are used to frame the research in this paper: social embeddedness and the entrepreneurial ecosystem. More specifically, this paper uses the idea that social networks influence economic activity to frame an investigation into some of the factors influencing artisan entrepreneur-mediators' ability to catalyze links between artisans and tourists. Rather than attempting to develop a theory of social embeddedness, this research aims to explore the nature of embeddedness to gain insight into the entrepreneurial process within a specific context. Social embeddedness is, hence, used as an analytical tool to research the influence of social networks on artisan entrepreneur-mediators' entrepreneurial action.\nMethods: This study focused on five pilot projects that are part of the CREATOUR network. Table I shows the main characteristics of the five selected pilots. 
These pilots were chosen because of their geographical representation (from each mainland Portuguese region) and their role as entrepreneurs who are connecting artisans to tourism.\nFindings: Describing the ways in which artisan entrepreneur-mediators connect with their artisans, the roles they take on, and the nature of this collaborative relationship allows for an examination of how these creative tourism networks operate \"on the ground.\" We find that the artisan entrepreneur-mediators in this study take on leading roles in connecting artisans to creative tourism. They present their vision to the artisans and are responsible for advertising the events, registering participants and collecting registration fees, booking the physical space where the workshops take place, and acting as guides. The artisan entrepreneur-mediators take on these roles out of necessity. As the proprietor of Quico Turismo says, \"I deal with the bureaucracy so the artists can be creative\" (interview, March 13, 2018), reflecting the perception that \"economics crowds out creativity\" (Werthes et al., 2017, p. 291).\nAnalysis: social embeddedness in a neo-rural context: Drawing from recent literature discussing the current rural discourse can help to conceptualize creative tourism entrepreneurial ecosystems and the role of artisan entrepreneur-mediators within them. In this context, two aspects appear to be critical: connection to place and type of community engagement. These aspects underpin the level of social embeddedness of the artisan entrepreneur-mediators, which, in turn, influences their role in connecting artisans to creative tourism and contributes to understanding the key challenges they face.\nConclusion: Artisans often create their own entrepreneurial conceptualizations and this research fills gaps in literature on how artisans connect to the tourism industry and what their entrepreneurial ecosystems look like. 
The authors find that artisans are becoming integrated into creative tourism, and investigate the multiple roles of entrepreneurs that act as mediators between artisans and emerging creative tourism initiatives. The term \"artisan entrepreneur-mediators\" is introduced to describe this missing idea within artisan entrepreneurship literature. It is revealed that artisan entrepreneur-mediators act as networking agents by organizing and offering creative tourism experiences that link rural artisans to tourists.\n",
"role": "user"
},
{
"content": "Purpose: Given limited research about how artisans become integrated into tourism, the purpose of this paper is to investigate the emergence of artisan entrepreneur-mediators who link artisans to tourism in rural areas and small cities in Portugal. Using social embeddedness as a conceptual framework, this paper views artisan entrepreneur-mediators as existing within an entrepreneurial ecosystem. The paper investigates their role within this ecosystem and how social networks influence the artisan entrepreneur-mediators' roles in connecting artisans to creative tourism.\nDesign/methodology/approach: The paper is based on new (2017 and 2018) empirical evidence developed through two rounds of semi-structured interviews of five artisan entrepreneur-mediators.\nFindings: This paper finds that artisan entrepreneur-mediators in rural areas or small cities take on multiple roles as networking agents who organize and offer creative tourism experiences, providing the missing link between artisans and tourists. An analysis of the nuances of the operations of these artisan entrepreneur-mediators suggests that high levels of social embeddedness within local rural communities are important in order for these neo-rural entrepreneurs to attain their goals.\nOriginality/value: Originality lies in the identification of a gap in artisan entrepreneurship literature in a rural context. It is the first time that a critical analysis of artisan entrepreneur-mediators who facilitate the link between artisans and tourism is carried out in terms of social embeddedness, their roles and connections to creative tourism, and types of community engagement.\n",
"role": "assistant"
}
] |
Management as an integrating activity: A comparative textual analysis of the work of Mary Parker Follett and Oliver Sheldon
|
[
"Integration",
"Mary Parker Follett",
"Oliver Sheldon",
"Textual data analysis"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\nIntroduction: It is critical that pioneers of any discipline reflect on the foundations of their respective fields (Verma, 1994). With this in mind, it is unsurprising that the first management theorists and practitioners were those who wrote the most on the \"fundamentals of management\" (Wren and Bedeian, 2009). As such, the 1920s can be considered a major moment for management studies or a seminal milestone in the history of the development of ideas in management. Numerous markers attest to the fact that this decade was decisive not only in fostering a general will to professionalize the activity of management but also in establishing the principles fundamental to moving from an empirical to a structured approach to management. Thus, the activities of the Taylor Society and the American Society of Mechanical Engineers (ASME) during this time both demonstrate a desire to equip the discipline with a structured body of principles (Brown, 1925; Sinclair, 1980). Moreover, the period also boasted a number of militant and active contributions from management practitioners. Although some of these individuals were critical of scientific management, they all shared a common desire to provide managers with skills and expertise so as to increase the efficiency of organizational systems, all in the context of increasingly turbulent business conditions; the case of Henry Dennison's contributions and experiments is typical in this respect (Bruce, 2015).\nMary Parker Follett: from social entrepreneurship to management consulting: Mary Parker Follett was an atypical writer, and as such, is difficult to classify into any particular category (Davis, 1989). In her work, she contributed to a renewed vision of democracy and to a novel understanding of management. 
One could even say that Follett addressed the general question of \"living together\".\nOliver Sheldon: from rational organizing to community service: Oliver Sheldon led several different lives. He was a talented manager and one of the key individuals responsible for the reorganization of Rowntree & Co. in 1920s York, England. He also significantly contributed to the dissemination of the ideas and principles of scientific management throughout England at that time. However, in England, Sheldon is perhaps best known for his decisive contribution to the foundation of the University of York and the preservation of the cultural and historical heritage of the City of York.\nA textual analysis of Follett and Sheldon's writings: The objective of the present research is to compare the management philosophies that permeate the work of these two thinkers. In doing so, we conduct a comparative analysis of a voluminous amount of qualitative material, which itself requires that we follow a systematic method of analysis so as to guarantee our study's significance. For these reasons, we have favoured a statistical treatment that covers all writings from both authors that deal with management. In doing so, we assume that an understanding of management philosophies can be obtained through the analysis of relationships between words contained in these writings. Other paths that we did not follow, such as the analysis of arguments and meaning (e.g. cognitive analysis, discourse analysis), would also allow comparison of management philosophies.\nTwo different ways of speaking about management as an integrating activity: Our textual analysis enables us to identify two thematic universes. They characterize the fundamentals of management according to Sheldon and Follett. 
In doing so, we uncover two dimensions of one and the same fundamental philosophical view of management, which is based on the concept of integration, defined as the process by which people become part of an organization and organizations become part of their environment. The concept is clearly mentioned (as a core concept) by Follett, while it is only suggested by Sheldon.\nConclusion: This research has been based on an exhaustive lexical analysis of the written works of Sheldon and Follett. It demonstrates these authors' singularities, which are linked to their respective biographical trajectories. Scientific management profoundly marked Sheldon's activities, while Follett's social entrepreneurship experience has been a source of inspiration for her conception of social interaction and management. The two authors share a focus on the concept of integration. For Sheldon, management is an integrative activity that consists of simultaneously serving the community and making the most rational use of resources. For Follett, integration is at the heart of the activity of management, in that it consists of the activity of relating, itself rooted in processes of reciprocal influence between individuals.\n",
"role": "user"
},
{
"content": "Purpose: This research aims to compare the contributions of two authors and practitioners from the 1920s whose work was, to a certain extent, at odds with the dominant scientific management approach of the period.\nDesign/methodology/approach: In this paper, a comparative textual analysis of texts written by Sheldon and Follett in the 1920s has been performed. This technique consists of a hierarchical descendant classification, which we use to uncover the thematic universes that Sheldon and Follett create in characterizing the fundamentals of management activity.\nFindings: This comparative textual analysis shows that Follett and Sheldon developed two different ways of relying on a singular fundamental principle of management: integration.\nOriginality/value: A comparative analysis of Follett and Sheldon's work has never been attempted. While textual analysis has been used in management research, to the authors' knowledge, such analyses are rare in research seeking to understand management history.\n",
"role": "assistant"
}
] |
Application of Bow-tie methodology to improve patient safety
|
[
"Patient safety",
"Intensive care unit",
"Bow-tie methodology",
"Proactive risk assessment"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\nIntroduction: Serious adverse events and medical errors threatening patient safety occur more frequently in the intensive care unit (ICU) setting as compared with other non-critical settings in hospitals (Clancy, 2007). Medication errors and adverse drug events (ADEs) cause significant harm to patients hospitalized in ICUs (Valentin et al., 2006; Clancy, 2007). In addition, critically ill patients are particularly vulnerable to iatrogenic injuries such as nosocomial infections, because of their medical complexity and need for high-risk interventions (Rothschild et al., 2005). However, related studies revealed that around half of the adverse events in the ICU are preventable (Rothschild et al., 2005; Kopp et al., 2006). This fact necessitates hospitals taking a more proactive approach to identify risks and implement strategies to alleviate their undesirable effects (Kessels-Habraken et al., 2010).\nMethodology: Setting and design\nFindings: Identify the hazards\nDiscussion: Applying risk analysis methods was proposed as an effective strategy by governing bodies to reduce harm to patients (Briner et al., 2013). Patients admitted to the intensive care setting are more likely to experience adverse events compared to others in non-critical settings (Boyle et al., 2006). However, evidence suggests that about half of the adverse events are preventable (Stockwell and Slonim, 2006). This fact emphasizes the need for applying PRA methods, which can reduce or remove contributing factors associated with the incidence of adverse events (Clancy, 2007). The PRA methods should be seen in terms of their main goal, reducing errors by way of identifying contributing factors to errors and proposing appropriate strategies to address these factors. 
However, most of the time, they are applied in the healthcare setting just for identifying errors, without considering appropriate strategies to reduce or mitigate the errors (Lyons, 2009).\nConclusion: In conclusion, Bow-tie methodology proves feasible for proactive risk management in the ICU. Its adaptability could be enhanced through the use of a number of tools. Based on the system approach, the Bow-tie allowed the generation of practical solutions to address work system deficiencies as well as to overcome and alleviate them. It was also successful in promoting clinicians' awareness about errors and error-provoking conditions within their environment and practice. Visualization of causes, barriers, and consequences in Bow-tie diagrams facilitated comprehension of the hazard prevention and protection required for safe operations in clinical settings. Generally, Bow-tie appears capable of playing a useful role in safety improvement programmes.\n",
"role": "user"
},
{
"content": "Purpose: The purpose of this paper is to apply Bow-tie methodology, a proactive risk assessment technique based on systemic approach, for prospective analysis of the risks threatening patient safety in intensive care unit (ICU).\nDesign/methodology/approach: Bow-tie methodology was used to manage clinical risks threatening patient safety by a multidisciplinary team in the ICU. The Bow-tie analysis was conducted on incidents related to high-alert medications, ventilator associated pneumonia, catheter-related blood stream infection, urinary tract infection, and unwanted extubation.\nFindings: In total, 48 potential adverse events were analysed. The causal factors were identified and classified into relevant categories. The number and effectiveness of existing preventive and protective barriers were examined for each potential adverse event. The adverse events were evaluated according to the risk criteria and a set of interventions were proposed with the aim of improving the existing barriers or implementing new barriers. A number of recommendations were implemented in the ICU, while considering their feasibility.\nOriginality/value: The application of Bow-tie methodology led to practical recommendations to eliminate or control the hazards identified. It also contributed to better understanding of hazard prevention and protection required for safe operations in clinical settings.\n",
"role": "assistant"
}
] |
Revenue management research in hospitality and tourism: A critical review of current literature and suggestions for future research
|
[
"Tourism",
"Revenue management",
"Literature review",
"Hospitality",
"Yield management",
"Hospitality research"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\n1. Introduction: Revenue management's (RM) existence in the hospitality and tourism field does not go back far into history. It started in the airline industry as yield management in the 1970s, where the core principles were aimed at maximizing yield per available seat (capacity utilization) (Cross et al., 2009; Donaghy et al., 1995). However, as the principles were adapted into the hotel industry and other service industries in the mid-1980s, it became more appropriate to redefine yield as revenue per available inventory to reflect the extension of the concept to a strategic level (Cross et al., 2011; Hayes and Miller, 2011; Jauncey et al., 1995; Kimes, 1989b). The earliest academic papers relating to hospitality RM were written by Orkin (1988), Kimes (1989a) and Relihan (1989). In Kimes' (1989a, p. 15) seminal piece of work, she defined RM in the hospitality industry as \"the process of allocating the right type of capacity to the right kind of customer at the right price so as to maximize revenue or yield\". To date, this definition is still the most widely cited and has only been modified slightly to include \"at the right time\" (Kimes, 2000) and \"through the right distribution channel\" (Hayes and Miller, 2011). In several other publications (Kimes, 2000; Kimes et al., 1998; Weatherford and Bodily, 1992), the conditions or characteristics amenable to effective implementation of RM have been identified. These include perishable inventory, limited capacity, fluctuating demand, proper customer segmentation, availability of advanced bookings and low marginal sales cost.\n2. Literature review: 2.1 A unified framework for evaluating hospitality tourism RM research\n3. Methodology: To provide a comprehensive understanding of the recent hospitality and tourism RM research, this study adopted a systematic process of reviewing the literature using content analysis as the main technique. 
The systematic process involved locating existing studies that are relevant to this study, evaluating and selecting the suitable studies, analyzing and synthesizing their results and organizing the findings in a manner that allows clear and logical conclusions to be drawn regarding what is known and what is yet to be known (Denyer and Tranfield, 2009). To locate the relevant articles for this study, keyword searches were conducted in three major online databases frequently used by hospitality and tourism researchers. These selected databases were EBSCOhost (used by Hung and Law, 2011), ScienceDirect (used by Tsang and Hsu, 2011) and ProQuest Business (used by Tsai et al., 2011). The list of keywords included capacity control, capacity management, competition, competitive set, distribution channel, distribution management, demand forecasting, duration control, inventory control, inventory management, market share, overbooking, price, pricing, rate, revenue management, revenue manager, segmentation and yield management. These keywords were generated based on the RM framework provided in the literature review. Each keyword was also combined with the following hospitality organizations: hotels, restaurants, cruise, casinos, attractions and theme parks, meeting and function space, spas, golf and travel/tour agencies. The search was limited to publications in hospitality and tourism journals identified in McKercher et al.'s (2006) study that have used one or more of the keywords in the title, abstract or keywords. Another limitation of the search was that it was restricted to publications within the past 10 years (2004-2013). This 10-year period (2004-2013) was chosen as an extension to Kimes's (2003) review so as to provide up-to-date knowledge of hospitality and tourism RM research which also goes beyond her own publications.\n4. Findings and discussions: The findings of this study are organized into three parts. 
The first part provides an overview of the papers that were reviewed. The second part presents a classification of the literature according to the unified framework for hospitality RM described in the literature review. The final part summarizes the gaps in the literature and highlights the directions for future research.\n5. Concluding remarks: This paper has examined RM research in the hospitality and tourism industry in the recent decade from the perspectives of the core activities of RM and the factors that influence the RM process. The purpose was to contribute to a concrete understanding of the aspects of the hospitality RM literature that have engaged the attention of researchers lately and to suggest directions for future research by identifying research gaps. Within the 10-year period that this study focused on, 158 full-length articles were found to have been published on hospitality revenue management. The majority of these studies were on hotel revenue management (63.9 per cent) and restaurants (15.2 per cent). Various topics were covered by these studies and they were appropriately classified into seven major themes - business analysis, pricing, demand forecasting, inventory optimization, booking controls, distribution channel management, and performance analysis and evaluation - constituting the core activities of the hospitality RM process and another eight themes - business strategy, customers, employees and human resources, technology, competitive forces, legal, economic and sociocultural factors - constituting the factors that influence RM. Of these classifications, pricing, customer and distribution channel management studies were the dominant topics followed by inventory optimization, demand forecasting and applications of RM in various hospitality organizations.\n",
"role": "user"
},
{
"content": "Purpose: The purpose of this study was to examine hospitality revenue management (RM) research in the recent decade, to identify emergent issues/topics and suggest directions for future research.\nDesign/methodology/approach: A systematic process of literature review involving content analysis was adopted. The process involved searching for published articles in three major online databases for hospitality and tourism journals, evaluating and selecting the relevant articles, analyzing and synthesizing the findings of the selected papers and organizing the findings to determine what is known and what is yet to be known.\nFindings: The paper established a broad range of topics that have engaged the attention of hospitality and tourism researchers in contemporary times. These topics were structured into seven major themes constituting the core activities of hospitality RM process and another eight themes covering the factors influencing the practice of RM.\nPractical implications: The classification of the literature into core activities of RM process and factors influencing RM serves as a useful guide for practitioners and academics to trace relevant literature on various aspects of RM and to visibly notice the gaps in the existing literature.\nOriginality/value: By developing an RM framework and using it to evaluate existing literature, this study brings cohesion into the hospitality and tourism RM literature.\n",
"role": "assistant"
}
] |
Teachers' commitment focuses: a three-dimensioned view
|
[
"Job satisfaction",
"Teachers",
"Schools",
"Turkey"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\nTheoretical background: There are various definitions of commitment in the related literature stemming from various approaches to this concept. According to the affective approach, commitment is \"the relative strength of a person's identification with and involvement in an organization\" (Mowday et al., 1982). According to Buchanan (1974), commitment is \"a partisan or affective attachment to the aims and values of an organization, to one's role in relation with these aims and values and to an organization for its own sake\". According to the cost-benefit approach, commitment is \"a result of the perception of benefit associated with staying in and the perception of cost associated with leaving from an organization\" (Kanter, 1968). According to the normative approach, commitment is \"the aggregate internalized normative pressures to conduct in a manner which meets organizational objectives and interests\" (Wiener, 1982). Commitment is not a monolithic but rather a multi-faceted concept. There are many focuses to which a person's commitment is directed. Commitment to coalitions and constituencies within an organization (managers, owners, customers, rank-and-file employees etc.) (Reichers, 1985), commitment to top managers and supervisors (Gregersen, 1993), commitment to career (Chang, 1999), commitment to unions (Deery et al., 1994), commitment to a programme (Neubert and Cady, 2001), commitment to an occupation or profession (Vanderberg and Scarpello, 1994; Rowlinson, 2001; Wallace, 1993), commitment to job (Aremu and Adeyoju, 2003) and commitment to workfellows (Yoon et al., 1994) can be deemed among these focuses. In the perspective of Meyer and Allen (1997), there are three types of commitment: affective, continuance and normative commitment, which will be explained as follows: Affective commitment\nMethodology: In this study, a survey-based descriptive scanning model was used. 
The population of the study consists of high school teachers who work in Elazig city center (total 1,124 teachers) in the 2005 academic year. All the population was thought to be reachable; therefore, a sample was not taken and the population was directly worked on. While the questionnaire was administered, 1,017 of the 1,124 teachers (which form the population) were reached and 983 of the questionnaires (87.4 percent of the population) were validated and evaluated. The questionnaire was inspired by Meyer and Allen's \"three dimensioned organizational commitment scale\". This scale was rearranged to measure the focuses of commitment and necessary regulations were made on the items. After the literature on organizational commitment was examined, the questionnaire draft was developed by the investigators. The opinions of specialists and teachers were taken about the content and the language of this draft and necessary improvements were made. The final version of the questionnaire had 21 items. There were three dimensions and each dimension had seven items. After the questionnaire was administered, the Kaiser-Meyer-Olkin measure of sampling adequacy (KMO) and the Cronbach alpha coefficient were computed. According to the results, KMO was 0.855 and the Cronbach alpha coefficient was 0.83 for the overall questionnaire. Reliability analysis results for each item and generally for each dimension are shown in Table I. The answers given to the questions were graded as never (1); rarely (2); sometimes (3); usually (4); and always (5). Data were analyzed using the SPSS programme, their standard deviations and arithmetic averages were computed and the techniques of t-test (for the variables of gender and marital status) and One-Way ANOVA (for the variable of tenure) were used in order to determine if there were any meaningful differences according to these demographic variables. 
For the interpretation, arithmetic averages were graded as: never, 1.00-1.80; rarely, 1.81-2.60; sometimes, 2.61-3.40; usually, 3.41-4.20; always, 4.21-5.00.\nThe findings and interpretations: A general view\nConclusion and discussion: There are many focuses to which a teacher's commitment is directed. The teaching profession, the school in which they work and the work group can be deemed among these focuses. Each of these focuses has its own antecedents. So a teacher may be committed to one of them while not committed to another. To strengthen teachers' commitment totally, and thus to elicit desirable work outcomes, it would be helpful to strengthen their commitment to the various focuses. In this study these focuses have been examined in a three dimensioned view. As a result, it was seen that teachers' commitment focuses and their kinds of commitment to one of these focuses can vary according to their personal characteristics such as gender, marital status and tenure. When we look at the findings in general, teachers have high levels of affective and normative commitment to the teaching profession but they have low levels of affective and normative commitment to the focuses of work group and the school in which they work. Although teachers like this profession, they have some problems with their work groups and their schools. Also, teachers assess the income and reputation they get from this profession as unsatisfactory, leading them to low levels of continuance commitment to it. This reveals the fact that there is a need for educational authorities to raise teachers' incomes and to make necessary arrangements for this profession to be more reputable and attractive. 
At the school level, to raise teachers' commitment levels to the focuses of work group and the school in which they work, school leaders may attempt to build more cohesive work groups and to make teachers more satisfied with the school. While teachers are affectively and normatively committed to the teaching profession, they have low levels of continuance commitment (due to the lack of income and reputation) to the same focus. As seen, employees can be committed to a focus in one dimension while not committed to the same focus in another dimension. If leaders want to benefit from their employees' efforts optimally, they need to detect and restore the defects within the commitment construct and reinforce employees' attachment to a focus in all aspects. Female teachers are more affectively and normatively committed to the teaching profession than males. Females like this profession and find it more fitting to their personalities. However, females have low levels of normative commitment to the work group and low levels of continuance commitment (based on lack of investments) to the school in which they work. These findings indicate some problems related to females' integration with the work group and their weak bonds to the school in which they work. For schools to be more effective, managers may attempt to make female teachers more integrated with the work group, which seems to be a male-dominated community. Besides, female teachers may be supported and aided to strengthen their bonds with the school, especially by relieving their outside school burdens (child care etc.). As Yoon et al.'s (1994) research revealed, employees' levels of commitment to their work group increase their levels of commitment to their organization. So, if female teachers' integration problems with the work group can be solved, their bonds with the school can be strengthened more easily. Married teachers are less affectively and normatively committed to the teaching profession than unmarried ones. 
Their affective and normative bonds to this profession may be weakened by outside school burdens and responsibilities (family, child etc.). Married teachers' continuance commitment levels to the teaching profession and to the school in which they work are higher than those of unmarried ones. As was similarly found in Mellor et al.'s (2001) research, married teachers perceive higher monetary costs associated with leaving this profession and they perceive their investments in the school in which they work as higher. While they do not dare to leave this profession because of their familial responsibilities, they do not want to venture their accumulated investments in their schools. As indicated in Haar and Spell's (2004), Shore and Wayne's (1993) and Fuller et al.'s (2003) research, for strengthening the married ones' weak affective and normative bonds to this profession, they may be treated in a supportive manner (family supporting treatments etc.) that makes them feel valued. As our findings and the abovementioned previous research showed, as employees' external burdens and responsibilities increase, their affective and normative bonds to their professions (or organizations) weaken, while their perceptions of cost associated with leaving increase, leading them to higher levels of continuance commitment. Continuance commitment alone does not yield beneficial results for organizations. Their affective and normative bonds having been weakened, employees may feel trapped by the organization and they display involuntary behaviours stemming only from compliance and calculation. Employees' negative feelings towards a focus might spill over to other focuses nested within organizations (Meyer and Allen, 1997). 
To prevent such undesirable results, it would be helpful for administrators to take care of their employees' lives within and outside the organization and to take necessary measures (in terms of supportive or relieving treatments) when a dramatic change occurs in their lives. Corroborating Allen and Meyer's (1993) research findings, as tenure increases, perceptions of investments having been made in schools increase and therefore teachers' continuance commitment levels to the focus of the school in which they work increase. 16-20 year tenured teachers have the highest levels of affective commitment (based on retrospective rationality and positive perception of person-job fit) and continuance commitment (based on monetary benefits) to the focus of the teaching profession. Employees develop behavioural commitment to their organizations as time goes on and they feel themselves to be bound to continue with the organization. Then they attempt to justify their actions, retrospectively, by developing emotional attachment to their organizations (Meyer and Allen, 1997). As our research also showed, affective commitment develops in time based on retrospective rationality and justification processes. Also, as age and tenure increase, employees' perceptions of the cost of leaving increase, leading them to develop continuance commitment (Abdulla and Shaw, 1999; Allen and Meyer, 1993). If the necessary conditions (e.g., volition, quality of job, irrevocability of the initial act) do not exist, tenured employees may not develop affective commitment (Meyer and Allen, 1997). As we have mentioned earlier, continuance commitment alone might cause unwanted consequences for organizations. 
So, it would be helpful for administrators to be attentive to the changes that might affect employees' psychological states (commitment is a psychological state that develops retrospectively) and to intervene when necessary for the sake of organizational objectives by trying to manage internal and external conditions to induce affective rather than continuance commitment. Although 1-5 year tenured teachers have the highest levels of normative commitment to the teaching profession, they are the least affectively and normatively committed ones to the focus of work group. This indicates the importance of socialization efforts for school leaders to make the new teachers more attached to the work group affectively and normatively. Allen and Meyer (1990) found that newcomers' socialization experiences are positively related to organizational commitment after they have been on their jobs for six months. After employees adapt themselves to their new roles and the new environment, they develop commitment to some related focuses. But the length of this adaptation period may differ from one organizational setting to another and may be affected by the dynamics of the work group and administrators' abilities to manage these dynamics. As Wiener (1982) indicated, trying to make employees' values, beliefs and perceptions compatible with those of the work group through socialization efforts is an important component of managing organizational commitment. Findings reveal that both new teachers and female teachers have weak bonds with their work groups, while both groups of teachers have strong bonds with the teaching profession. As Vanderberg and Scarpello's (1994) research showed, there is a strong positive relationship between commitment to profession (or occupation) and commitment to organization. But commitment to profession may not solely cause employees to develop higher organizational commitment (Wallace, 1993). 
If it is combined with work group commitment, through effective group processes and synergy, it may produce more desirable results. If leaders want to create more cohesive work groups, comprising members with high levels of work group commitment, it may be helpful for them to have group leadership skills, especially in terms of building bonds, teamwork and collaboration. Previous research revealed that employees' commitment levels to a specific focus have significant influence on their attitudes and behaviours related to this specific focus (Snape et al., 2006; Vanderberghe et al., 2004). The importance of a specific focus and focus-related outcomes may differ from one organizational setting to another, because the way organizations are structured and operate may render some focuses more salient than others (Vanderberghe et al., 2001). The commitment focuses and the related variables we examined in this study may differ at different levels of schools or in other organizations. For the effectiveness of organizations, administrators should analyze and determine the most important focuses worth committing to and know which variables employees' commitment focuses are related to; they should manage commitment effectively for the sake of organizational objectives and take it into consideration when determining their policies and strategies. We hope this study gives an insight to understand the subject matter.\n",
"role": "user"
},
{
"content": "Purpose: - The purpose of this research is to determine high school teachers' organizational commitment levels, their commitment focuses and variables to which their commitments are related.\nDesign/methodology/approach: - A survey-based descriptive scanning model was used. The study was carried out in Elazig city on teachers working in public and private high schools. The questionnaire was administered to the whole population of 1,017 teachers.\nFindings: - The results show that teachers' commitment focuses, their types and levels of commitment to these focuses vary according to their personal characteristics such as gender, marital status and tenure. Although female teachers are more affectively and normatively committed to the teaching profession than their male counterparts, they have low levels of normative commitment to the work group and low levels of continuance commitment (based on lack of investments) to the school in which they work. Married teachers are less affectively and normatively committed to the teaching profession than unmarried ones. However, married teachers' continuance commitment levels to the teaching profession and to the school in which they work are higher. As tenure increases, perceptions of investments having been made in schools increase and therefore teachers' continuance commitment levels to the focus of the school in which they work increase. Although one-to-five year tenured teachers have the highest levels of normative commitment to the teaching profession, they are the least affectively and normatively committed to the focus of the work group.\nResearch limitations/implications: - The focus of the study is teachers working at high schools. Teachers working at various school levels may be committed to different focuses or to the same focus at different levels. It may be that a larger study across school levels would have revealed differences across them. 
Also, the underlying reasons why some teachers are committed to some focuses may be probed more profoundly.\nPractical implications: - Keeping in mind the importance of teachers' commitment to various focuses and its effects on school effectiveness, educational leaders should take necessary measures to remedy the problems that cause teachers' lack of commitment. In this context, school leaders may attempt to strengthen: female teachers' weak normative bonds to the work group, married teachers' weak affective and normative bonds to the teaching profession, and new teachers' weak affective and normative bonds to the work group. The findings reveal the need for more supportive and integrative managerial actions to raise teachers' levels of commitment. School leaders may be more attentive and develop special strategies contingent on their employees' personal characteristics to create high-commitment workplaces.\nOriginality/value: - The relevant literature shows that the types and levels of teachers' commitment focuses are quite an under-researched area and the study has contributed to one's understanding of these issues.\n",
"role": "assistant"
}
] |
How healthcare organisations can act as institutional entrepreneurs in a context of change
|
[
"Canada",
"Healthcare",
"Management strategy",
"Organisational theory",
"Leaders",
"Change process"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\n1. Introduction: Institutional theory is one of the most prominent approaches used for understanding organisational phenomena (Battilana et al., 2009; Greenwood et al., 2008; Scott, 2000). Whereas early institutional studies (c. 1950) were interested in actors' agency, later studies focused much of their attention on how environmental context influenced actors (DiMaggio and Powell, 1983; Tolbert and Zucker, 1983). Following DiMaggio's (1988) suggestion that institutional theory should reincorporate an \"agency\" concept, scholars have become increasingly interested in understanding how organisations also shape their environment. In seeking to understand better how innovative behaviours emerge, how individuals and organisations behave strategically, and how they influence change in institutionalised environments, scholars have developed the concept of institutional entrepreneurship (Beckert, 1999; Oliver, 1991). Institutional entrepreneurs, whether organisations or individuals, are described as actors who put forward ideas that diverge from dominant models (Battilana et al., 2009) and thereby introduce change in established routine practices.\n2. Theoretical background: institutional theory: Several authors have sought to understand not only how environment exerts a determining influence on organisations, but also how organisations in turn influence their environment. This concept, known as the \"paradox of embedded agency\" (Seo and Creed, 2002), reflects the tensions that exist between institutional determinism and agency with respect to specific action (Battilana et al., 2009; Greenwood and Suddaby, 2006). This paradox combines the key characteristics of neo-institutionalism (environmental pressures) and earlier work on agency (Haveman and David, 2008). 
Thus, a central challenge for institutional theory is to show how and why embedded actors become motivated and able to envision new practices and then subsequently get others to adopt them (Greenwood and Suddaby, 2006). In this section, we briefly describe the environmental pressures that influence organisational behaviour. Then we present the concept of institutional entrepreneurs. Finally, we introduce a conceptual model that can help in understanding the process of institutional change, by seeking to bridge what have come to be termed \"old\" and \"new\" institutionalisms in organisational analysis.\n3. Methodology: 3.1 Research setting\n4. Empirical findings: As mentioned, the mandated reform was based on an ideological shift from a services-based to a population-based logic. The new CSSSs were expected not only to provide services to those who requested them (services-based), but more broadly to assume responsibility for the health status and needs of the population in their geographical territory (population-based). This shift generated ambiguity by destabilising long-standing practices and, by opening minds, created a window of opportunity to create new practices in the field.\n5. Research and practice implications: 5.1 Research implications\n",
"role": "user"
},
{
"content": "Purpose: The aim of this paper is to illustrate and discuss how healthcare organisations can act as institutional entrepreneurs in a context of change.\nDesign/methodology/approach: The authors conducted an in-depth longitudinal case study (2005-2008) of a healthcare organisation in the province of Quebec, Canada. Data collection consisted of real-time observations of senior managers (n=87), interviews (n=24) with decision-makers and secondary data analysis of documents.\nFindings: The paper reports on the extent to which entrepreneurial healthcare organisations can be a driving force in the creation of a new practice. The authors analyse the development of a diabetes reference centre by a healthcare organisation acting as an institutional entrepreneur that illustrates the conceptualisation of an innovation and the mobilisation of resources to implement it and to influence other actors in the field. The authors discuss the case in reference to three stages of change: emergence, implementation and diffusion. The results illustrate the different strategies used by managers to advance their proposed projects.\nResearch limitations/implications: This study helps to better understand the dynamics of mandated change in a mature field such as healthcare and the roles played by organisations in this process. By adopting a proactive strategy, a healthcare organisation can play an active role and strongly influence the evolution of its field.\nOriginality/value: This paper is one of only a few to analyse strategies used by healthcare organisations in the context of mandated change.\n",
"role": "assistant"
}
] |
A transformed fixed effect stochastic frontier approach for productivity evaluation in Indian electricity sector
|
[
"Reform",
"Unobserved heterogeneity",
"Stochastic frontier analysis",
"Firm-level panel data",
"Indian electricity sector",
"Total factor productivity"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\n1. Introduction: In the past two decades, the Indian power sector has witnessed structural reforms through several landmark regulatory changes. In line with power sector reforms elsewhere internationally, reforms in India primarily emphasized the un-bundling of vertically integrated utilities into functionally separate entities dealing with generation (production) and transmission and distribution (T&D) (service). The reforms also attempted to attract private capital to the sector. The primary policy objectives of initiating reforms in the sector were anticipated efficiency gains and cost reduction. Therefore, an empirical assessment of utility-level productivity changes in the generators and T&D utilities shall reveal the extent to which reforms have influenced the sector in the intended direction. However, pan-India measurement of productivity changes across the power sector value-chain poses two key challenges. First, utility-level heterogeneity due to diversity in geography, local regulations, technology employed and other unobserved factors makes pan-sector (and pan-India) measurements prone to omission bias. Second, due to relatively lax regulatory requirements for central collection and maintenance of utility-level operating data in India, productivity measurements have to rely on data collated from multiple sources or estimated from aggregate numbers. Hence, a majority of extant research studies investigating efficiency changes in utilities in the Indian power sector have been confined in scope to a specific geography, firm or technology.\n2. Deregulation and measuring utility-level productivity change: The global wave of restructuring since the early 1990s systematically brought about significant changes in industry structure, ownership pattern and mode of regulation in the electricity sector in several countries. 
A common feature in many of these reforms initiatives is the dismantling of vertically integrated utilities, thus, separating generation of electricity (production) from T&D (service), such that coordination of demand-supply, after restructuring, happens over a specialized market-based mechanism. It is suggested that the introduction of competition and market-based transactions in the sector is made on ex ante anticipation of improvement in technical efficiency, reduction in operating costs and hence positive welfare gains (Joskow and Schmalensee, 1983). Wolfram (2005) argues that restructuring would lead to efficiency gains because of: the new incentives faced by the incumbents to improve efficiencies; takeover of inefficient older plants and the arrival of new entrants with newer technologies; and competition driving efficient allocation of factors of production. Thus, in the restructured industry setting, ex post measurement of efficiency improvements for utilities across the electricity value-chain provides a basis for critical empirical scrutiny of the reforms policy.\n3. Indian power sector reforms: The Indian Electricity Act 1910, promulgated primarily to ensure safety, was the earliest legislative attempt to regulate the working of the Indian electricity industry. After independence, the Indian Constitution accorded concurrent status to the electricity sector, thus, placing it simultaneously under the responsibility of the central and state governments. Subsequently the Electricity (Supply) Act 1948 came into effect that paved the way for the formation of vertically integrated government owned agencies: the State Electricity Boards (SEBs), entrusted with the responsibility of generation, T&D of electricity in the respective states. However, the power sector continued to be characterized by capacity underutilization, inefficient operations and financially imprudent pricing policies. 
This consequently led to chronic shortages, poor quality of supply, frequent breakdowns and bankruptcy of the SEBs. The genesis of reforms in the power sector can be broadly traced to these deteriorating conditions. Arun and Nixson (1998) present a detailed discussion on the nature and genesis of reforms, beginning with the amendments to the Electricity Act 1910 and Supply Act 1948 in the year 1991, which primarily opened up the sector to private investments. Subsequently, in 1998, the Electricity Regulatory Commissions Act was enacted, resulting in the setting up of the Central Electricity Regulatory Commission (CERC) and other state-level regulatory bodies. While CERC was primarily concerned with the regulation of firms owned and operated by the central government, it also regulated coordination activities for entities spanning multiple states. However, these early attempts hardly made any substantial impact on the growth and recovery of the power sector. In a critical examination of this early phase of reforms, D'Sa et al. (1999) and Dubash and Rajan (2001) highlight that the piecemeal approach to reforms failed to rein in the progressively languishing state of the power industry.\n4. Data variables: A sample data set of Indian power-generating utilities and T&D licensees for the period of 2001-2010 is created in the present work. The sample represents about 46 percent of total generation and about 60 percent of total electricity consumption in India during the period. The sample spans across 19 states and represents ownership under central government, state government and private investors. Variable definition, unit of measurement and respective sources of data are summarized in Table II.\n5. Methodology: Transformed fixed-effect stochastic frontier model\n6. 
Results and discussions: The empirical investigation of the present work is guided by two key objectives: first, to determine the distribution and nature of productivity change in the power sector, and second, the identification of sources of inefficiency. An attempt is made to fulfill these objectives by first, jointly estimating inefficiency and the exogenous determinants of inefficiency, and then the estimated TFP changes are decomposed into constituents of: technical change, efficiency change, scale effects and price effects, to understand the nature of change.\n7. Conclusion: The empirical findings obtained from the present study suggest that utility-level productivity in the Indian power sector has generally declined during the observed period of 2001-2010. Also, it is estimated that the state-level un-bundling of the electricity sector is not significantly associated with utility-level efficiency change. Furthermore, efficiency improvements attributable to increased competition are observed only in the case of smaller gas-based generating utilities. During the period after Electricity Act 2003, positive technology change is observed while simultaneously a decline in efficiency is observed for the coal-based generators. Improvement in efficiency over time is observed only for gas-based generators and integrated utilities, whereas T&D licensees show a decline in both technical change and efficiency.\n",
"role": "user"
},
{
"content": "Purpose: The purpose of this paper is to evaluate the utility-level productivity changes in Indian electricity sector during a period that witnessed structural reforms through several landmark regulatory changes.\nDesign/methodology/approach: A transformed fixed-effect stochastic frontier panel approach accounting for time-invariant unobserved heterogeneity is employed to evaluate the productivity changes, and the inefficiency level in 98 utilities spanning over the years 2001-2010. A flexible translog production model is modeled and estimated, and decomposition of productivity into components of changes in efficiency, scale technology and price effect is computed.\nFindings: The empirical findings obtained from the present study suggest that the utility-level productivity in Indian electricity sector has generally declined during the observed period of 2001-2010 specifically after the implementation of Electricity Act 2003. Also, it is estimated that the state-level un-bundling of the electricity sector is not significantly associated with utility-level efficiency change. Furthermore, efficiency improvements attributable to increased competition are observed only in the case of smaller gas-based generating utilities.\nOriginality/value: Earlier studies on the productivity evaluation of Indian electricity industry have applied the non-parametric data envelopment analysis approach, which has several limitations. The novelty of the paper lies in the fact that this paper is one of the first attempts that implement transformed fixed-effect stochastic frontier panel approach and thus disentangle unobserved heterogeneity from inefficiency. Furthermore, it is the only paper that analyzes 98 utilities (51 generating utilities, 38 transmission and distribution licensees and 9 vertically integrated utilities) in a single framework during the period 2001-2010.\n",
"role": "assistant"
}
] |
Open source geographical resources analysis support system (GRASS) for landslide hazard assessment
|
[
"Geographic information systems",
"Landslides",
"Open systems",
"Geology",
"Malaysia"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\nIntroduction: With the advance of internet technology, open source software is becoming increasingly popular. There are several open source GIS packages on the internet, such as mapServer, generic mapping tools (GMT), open source software image map (OSSIM) and GIS viewer, amongst others. The most popular and comprehensive is the geographical resources analysis support system (GRASS). GRASS is a raster/vector GIS software with more than 350 modules for analyzing, storing, displaying, and modeling data. As open-source software, GRASS is continuously updated by multi-national teams consisting of many researchers. The internet is the backbone of the project, where distribution, user support, and centralized management of GRASS development are undertaken (Neteler and Mitasova, 2002). Whilst GRASS has limited capability in database and statistical analysis, it is linked to many open source database and statistical software packages such as PostgreSQL and R stats (Figure 1). GRASS software operates under various UNIX-compliant systems such as Linux and Sun Solaris, and also under Microsoft-Windows NT/2000/XP (with Cygwin). For maximum effectiveness, familiarity with the UNIX environment is required (Neteler and Mitasova, 2002). For this study, GRASS was installed under the Linux operating system, which is also open-source software. Presently, GIS application is gaining popularity and becoming a necessity in environmental studies. However, due to the high cost of commercial software, the use of GIS is limited, especially in developing countries such as Malaysia. The availability of free comprehensive GIS software such as GRASS will enable the financial costing for GIS software to be channeled toward purchasing better hardware or data acquisition. Owing to its \"free\" nature, it is much anticipated that GRASS will be widely used by many organizations; however, in reality, it is not. 
The nature of such free software, with modifiable source code and without direct technical support, is probably the biggest obstacle to its potential users. However, there is indirect support from the user group and also archives of past queries and answers on the internet (http://op.gfz-potsdam.de/GRASS-List/). A full list of tutorials for GRASS can be found on the main GRASS homepage (www.geog.uni-hannover.de/grass/). Landslides often occur during or after heavy rainfall due to an increase in pore water pressures, which reduce the effective shear strength of the slope material, thus reducing the factor of safety. The stability of natural and cut slopes is governed by topographic, geologic and climatic variables which control shear stress and shear resistance in the slope. The combination of hilly areas, a deeply weathered rock profile and high intensity tropical rainfall is a near-ideal condition for landslide development in the highlands of Malaysia. Changes in slope morphology, which may occur as a result of natural or human influences, can induce failure. These changes may alter the drainage conditions of the surface and subsurface and/or induce \"over-steepening\" of the slope (Jones and Lee, 1994). This condition has been shown in the increasing trend of landslide occurrence in Malaysia as a result of intensive \"human influence\" or development, especially in hilly areas. Fatalities caused by landslides were rare in Malaysia before 1993. However, this has changed: in December 1993 the Highland Tower tragedy claimed the lives of 48 people when a massive landslide swept away the building. After that, in 1995, landslides at the Genting bypass caused 14 deaths. In August 1996, 35 died when a torrent of water and debris from a nearby hill swept away an orang asli settlement in Pos Dipang. In January 2002 landslides caused 17 fatalities in Sarawak. 
The latest was on 26 November 2003, when major landslides occurred, with an estimated 100,000 m3 of rock and soil blocking the 21.8 kilometer North Klang Valley Expressway between Jalan Duta and Bukit Lanjan. These landslide occurrences show that demarcating and mapping potentially unstable areas prior to, and after, development is therefore crucial. GIS has been recognized as a useful tool in landslide studies as early as 1990 by Dikau, where geological maps, landslide inventories, and landform morphometry information were utilized to produce a hazard classification map (Dikau, 1990). Whilst landslides can be defined as the movement of a mass of rock, debris or earth down a slope (Cruden, 1991), hazard can be defined as the likelihood of occurrence of an event in an area within a specified period of time (Varnes, 1984). There are several different methods that have been proposed for landslide hazard assessment, which can mostly be classified into two major classes: rating (Sinclair, 1991; Anbalagan, 1992) and deterministic techniques (Cascini et al., 1991; Gokceoglu and Aksoy, 1996). The advantage of the rating technique is that the data collection process is less intensive than for deterministic techniques, although the former has lower accuracy compared to the latter. The advantage of a well-constructed hazard rating is that it is user-friendly and can be understood by a non-specialist. Only one published study of landslides using GRASS has been undertaken, using a deterministic technique to evaluate landslide susceptibility in the Handslovska Kotlina Basin in Slovakia (Paudits and Bednarik, 2002). It was concluded that the GRASS GIS environment is an ideal tool, not only for its \"price performance ratio\" but also for its capability, which is on par with other commercial software. 
However, this study did not mention anything about the user-friendliness of the software. For that reason, a study producing a landslide hazard assessment was undertaken in Tanah Rata, Cameron Highlands, in the northwestern corner of Pahang, about 250 km by road from Kuala Lumpur, Malaysia (Figure 2). Although there are already two previous studies of this area (Talib, 1997; KPU, 1999), both were regional in scope and neither utilized GRASS as its GIS software. The main objectives of this paper were: 1. to evaluate the application of GRASS for landslide hazard assessment; and 2. to produce a hazard assessment map for the Cameron Highlands. For this purpose, the simplified modified rating system (Anbalagan, 1992) was used.\nGeology: The geology of the study area consists of granite and alluvium (Figure 3). Based on field observations, the weathering profile of grades IV-VI can be up to 20 meters deep. Weathering grade I denotes fresh rock, whereas grade VI denotes total soil (Table I). Borehole data showed that in some areas (BH2) the highly weathered soil can be up to 13 meters thick (Figure 3).\nData and method: The landslide hazard evaluation factor scheme recognized six inherent causative factors for slope instability, namely: lithology, structure, depth of soil cover, slope morphometry, relative relief, landuse or land cover, and hydrological conditions (Anbalagan, 1992). For the purpose of constructing a simple hazard classification, only five factors were extracted. The five factors were: lithology, slope morphometry (slope angle), structure (lineament), land use and hydrological condition (water features). The modified rating for the landslide hazard scheme from Anbalagan (1992) is shown in Table II. The contributory factors are summed to produce an overall rating for the study area. The rating is classified into five zones (Table III). 
The flow chart for the hazard classification is shown in Figure 4. The data sources are: * Topographic map. The topographic map was supplied by the Department of Survey and National Mapping, Malaysia (JUPEM) at a scale of 1:10,000. The map was digitized on-screen in the ArcView environment. The map was originally planned to be digitized in the GRASS environment, but a problem in installing the driver inside GRASS hindered the plan; it is probably a straightforward procedure for an expert but not for a new user. The data, consisting of contours and rivers, were then imported into the GRASS environment using two modules, namely v.in.arc and v.support. The digital elevation model (DEM) was then generated and automatically converted into raster format using v.surf.rst. From the DEM, slope angle was derived using r.slope.aspect. The rivers were converted from vector to raster using v.to.rast. For viewing in 3D, the Nviz command was used. Nviz is a GRASS module which allows users to realistically render multiple surfaces in a 3D space, optionally using thematic coloring, draping GRASS vector files over the surfaces, and displaying GRASS site files either draped on the surfaces or as 3D point locations (GRASS, 2003). * Geological map. The lithological information was based on the geological map published by the Department of Minerals and Geoscience, Malaysia (Figure 3). The geological map was also digitized in the ArcView environment and imported into the GRASS environment using the same procedure as for the contour and river data. After that, the geological boundaries and lineaments were converted into raster format using v.to.rast. From this map, lithological boundaries and lineament trends were extracted. * Landcover map. The landcover map was digitized from the landuse map of 2000 published by the Department of Agriculture, Malaysia. 
The vector data were then imported into GRASS and converted into raster format using the same procedure as for the geological map.\nResults and discussion: It was noted that the installation of GRASS is straightforward: the software can be downloaded from the internet and installed using a shell script. However, it can be intimidating at the beginning due to the \"Unix\" nature of Linux, where most commands need to be typed in, in contrast to a window environment based on clicking a button and waiting for an \"ok\" window to pop up, which is not how Linux works. The first problem was encountered with the modules when an \"error\" statement showed that Nviz was unable to load properly due to a problem with shared libraries (a type of file in Linux). After searching the user group archives, the suggested solution was found. However, due to the unfriendly environment of Linux, the authors only managed to solve the problem with the help of a Linux expert. A problem was also encountered with the extraction of two parameters, lineament trends and geological boundaries, from the geological map. Originally, the contour and river data were digitized from the topographical map and the lineament trends and geological boundaries from the geological map. It was noted that the lineament trends and geological boundaries did not match when overlaid onto the slope angle and river data. The most likely explanation of this problem is the small scale of the original geological map and also the distortion of the geological map, which had been subjected to photocopying processes. This problem was overcome by tracing the lineament and geological information onto the topographical map. 
The map then had to be scanned and digitized again. The results showed that about 93 percent of the study area falls under zone II, that is, low hazard, with about 7 percent in zone III, moderate hazard, and less than 1 percent under zone IV, which is high hazard (Table IV and Figure 5). The majority of the area in zone III (approximately 90 percent) is in close proximity to the river (Figure 5). This is not surprising because the orientation of the river is used to interpret the structural geology (i.e. lineament) of the area (Figure 2). Therefore, the presence of the river and lineament, which overlapped with each other, gave rise to a relatively high summed ranking value for both categories. Only 0.2 percent of the study area is categorized in zone IV, of high hazard. These areas can be seen in the northwest of the study area (Figure 5). Whilst the main drawback of the rating technique is the subjectivity of the criteria by which ratings are assigned to the various factors taken into consideration (Van Westen, 1993; Donati and Turrini, 2002), its advantage is that it is relatively easy to construct and understand, especially for the non-specialist. Other techniques, such as deterministic hazard analysis, are considered to be more accurate, but intensive data collection is needed, which takes longer and is financially more expensive (Hammond et al., 1992; Guzzetti, 1993; Gokceoglu and Aksoy, 1996). It is important to emphasize that the primary aim of this paper is to evaluate the potential application of GRASS to landslide studies; thus, for such a purpose, the five factors that were used are deemed appropriate. The results show that GRASS GIS is capable of performing environmental analysis, such as landslide hazard assessment as in this study. The reliability of the results depends on the data applied and not on the tool, in this case GRASS. 
The accuracy of the map may be assessed by cross-correlation with landslide occurrences in the field, which is the subject of further studies. Limited knowledge of the UNIX environment and the command-based GRASS instructions that need to be remembered and typed in posed the greatest challenge for this study. This \"not so user-friendly environment\" will certainly discourage potential users. Although GRASS provides menu-based commands in a Tcl/Tk (window-based) environment, their application is not extensive and is constantly affected by errors during command execution. This problem is, however, not encountered in the UNIX environment. Extensive functionality in both GIS and image processing for remote sensing is the main strength of GRASS. GRASS is certainly not the typical stereotype of open source software that is strained by bugs and limited functionality.\nConclusions: The major outcome of this research is the possible use of open source GIS software in the application of landslide hazard assessment. The use of GRASS permits a reliable solution for environmental analysis for budget-constrained researchers or organizations. Although further research is required, the use of GRASS could produce significant cost savings, where funds can be directed toward acquiring better hardware or more reliable data collection, which will increase the reliability and accuracy of the environmental analysis undertaken. However, it will certainly incur an initial cost for training in an unfriendly UNIX-compliant system such as Linux, with which not many people are familiar.\n",
"role": "user"
},
{
"content": "Purpose: - The primary aim of this research is to investigate the application of open source geographic information system software, geographical resources analysis support system (GRASS) for landslide hazard assessment.\nDesign/methodology/approach: - Five parameters affecting landslide occurrence derived from topographical, geological and land use maps of Cameron highland were used for the assessment.\nFindings: - The results showed that about 93 percent of the study area falls under zone II that is of low hazard, with less than 7 percent on zone III with moderate hazard and only less than 1 percent falls under zone IV, which is of high hazard.\nResearch limitations/implications: - The accuracy of the landslide hazard map needs to be assessed by cross-correlation with landslide occurrence in the field.\nPractical implications: - The map produced showed the potential application of GRASS as a tool for producing landslide hazard assessment map.\nOriginality/value: - The major outcome of this research is the possible use of open source GIS software in the application of landslide hazard assessment. The capability of GRASS in performing such environmental assessment will certainly attract many researchers and organizations with limited budgets, especially in developing countries such as Malaysia.\n",
"role": "assistant"
}
] |
Effects of knowledge management on unit performance: examining the moderating role of tacitness and learnability
|
[
"Knowledge management systems",
"Tacit knowledge",
"Knowledge management",
"Knowledge-in-practice",
"Learnability"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\n1. Introduction: In recent years, knowledge management (KM) has become a cornerstone principle of management. The explosion of popularity has led to many academic studies showing how the deliberate management and maintenance of knowledge resources can help to deliver added value to organizations (Chuang et al., 2016; Forte et al., 2016; Bharati et al., 2015). Despite the proposed benefits of KM, mounting research has also examined the barriers to KM (David and Fahey, 2000), disagreement around the impact of KM on various organizational performance outcomes (Garud and Kumaraswamy, 2005; Booker et al., 2008) and the fact that many KM activities fail to deliver results (Bell DeTienne et al., 2004). In light of these discrepant findings, scholars have called for a finer-grained understanding of the conditions, contingencies and other situational factors that can explain variability in the relationship between KM and performance (Heisig et al., 2016). Indeed, this research approach reflects a broader perspective within the literature that suggests that KM should be matched to environmental factors (Sabherwal and Sabherwal, 2007; Reus et al., 2009) and linked to a firm's current and future knowledge requirements (McCann and Buckner, 2004).\n2. What is knowledge management?: The field of KM draws on a wide variety of fields including philosophy, management theory, cognitive science, social science, management science, information science and, more recently, artificial intelligence (Pandey and Dutta, 2013; Kakabadse et al., 2003). This has resulted in a wide range of definitions, frameworks and taxonomies. KM has become something of an \"umbrella construct\" (Hirsch and Levin, 1999) used to cover a spectrum of activities ranging from organizational learning, to database management tools, to broad KM systems, to social learning and communities of practice and many aspects in between. 
This evolution has also resulted in a range of definitions for KM. Actually \"despite the popularity of KM, there is no one simple definition available\" (Yahyapour et al., 2015, p. 1296). For clarity, this paper defines KM as \"the planned and ongoing management of activities and processes for leveraging knowledge to enhance competitiveness through better use and creation of individual and collective knowledge resources\" (Heisig et al., 2016; CEN, 2004). The key point of this definition is that it suggests KM is a set of both planned and ongoing activities for leveraging knowledge to enhance competitiveness.\n3. Methods: This study was conducted in seven firms in a single industry to control for between industry differences. The health-care industry was selected due to the wide variation in the use of different KM activities and the variation in the tacitness and learnability of the knowledge underlying work. In addition, organizational structures and established subordinate-supervisor roles permit effective data collection at the unit level. This unit-level data include knowledge context, adoption and deployment of KM activities and performance variables.\n4. Results: Table II presents the means, standard deviations and correlations for measures used in the study. To examine the issue of multicollinearity across variables, variance inflation factors (VIFs) were calculated. The maximum VIF within the models was 2.64, which is well below the suggested cut-off of 10 (Neter et al., 1990).\n5. Conclusion: This study contributes to growing interest in the role that situational issues or contextual issues play on the relation between KM and performance outcomes (Heisig, 2009; Heisig et al., 2016). Building upon recent KIP theory (McIver et al., 2013), we tested how the relation between KM activity sets and unit performance might depend on the tacitness and learnability of the knowledge involved in a unit's work. 
Our findings generally replicate previous findings that KM activities are directly associated with higher performance. To our surprise, and counter to predictions from KIP theory - we found no support that the relation between KM and performance varies depending on the tacitness and learnability of knowledge involved in work. This finding is significant because it challenges KIP theory that suggests KM activities should vary in effectiveness depending on aspects of the context that they are adopted in (Heisig, 2009; McIver et al., 2013; Reus et al., 2009). We now turn to briefly discuss the implications of these findings for theory and practice.\n",
"role": "user"
},
{
"content": "Purpose: This paper aims to examine and test the moderating influence of the type of knowledge underlying work - known as the knowledge in practice (KIP) perspective - on the relationship between knowledge management (KM) activities and unit performance. KIP proposes that the knowledge underlying work varies according to two dimensions: tacitness and learnability. This theory proposes that aligning KM activities with tacitness and learnability results in increased performance. However, to the authors' knowledge, there exist no direct empirical tests of the propositions outlined in KIP theory. This study examines the empirical support for the theoretical predictions outlined by KIP.\nDesign/methodology/approach: The study uses a multiple-survey, multiple-respondent design to measure KM activity sets, the tacitness and learnability involved in work contexts and unit performance. Regression analysis is used to test the hypotheses.\nFindings: In line with previous research, the authors find support for a direct relationship between some KM activity sets and unit performance. Surprisingly, the authors did not find support for the predictions offered by KIP theory. Specifically, the degree of tacitness or learnability did not moderate the relationship between KM activity sets and unit performance.\nResearch limitations/implications: The lack of findings to support the moderating effects of tacitness and learnability on the relationship between KM activity sets and unit performance challenges the adequacy of existing formulations of KIP theory. The authors discuss several important future research directions to examine this puzzling finding.\nPractical implications: This paper reinforces the suggestion that managers at all levels of organizations should engage in KM activities to increase performance. 
These findings also suggest that considering the type of knowledge underlying a unit's work should not be a consideration in implementing KM activities.\nOriginality/value: This is the first study to empirically test a KIP perspective. That is, how the type of knowledge involved in work moderates the relationships between KM activity sets and unit performance.\n",
"role": "assistant"
}
] |
Sky's new starters get up to speed with e-learning: Continuous innovation is the key to success
|
[
"Computer‐based learning",
"Employee induction",
"Innovation",
"Awards",
"Cost reduction",
"Employee turnover"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\n__NO_TITLE__: Engaging a new member of staff is a delicate process. It involves the careful management of first impressions, as these are often difficult to shake once set in the mind of the new employee.\nImpact of the induction training: The on-boarding portal met and exceeded objectives in the following ways:* sales conversions for new starters increased in week one;* some 37.5 hours were saved in training each recruit, which amounts to a potential saving of 75,000 hours a year;* advisors were classed as competent at least one week earlier than before, despite a 25 percent reduction in face-to-face training time; and* around £700,000 a year has been saved in induction costs.\"Our pre-learning portal is an innovative way for us to engage with new Sky people, giving them a head start settling in,\" said Dave Rumble, Sky sales director. \"I am really pleased that we can now provide an effective solution that gives just the right balance of useful e-learning, a company overview and fun activities.\"\"I was impressed by the information the delegates had retained,\" commented Lorraine Jordan, operational trainer, sales. \"The new induction training allows trainers to invest more time in covering products and services in depth. 
This means I can spend more time on how the features of our products can benefit customers.\"\"If you have anything to do with the design of this induction, you have done a superb job,\" enthused new sales advisor Georgie Wilkie.\"In an organization that successfully delivers more than 100,000 hours of e-learning a year, Get Up to Speed sets a new standard in highly engaging and effective e-learning that captures the spirit of Believe in Better,\" explained Kenneth Henderson, Sky's head of talent-development operations.\nTraining collaboration wins award: The five-year collaboration between Sky and Brightwave on Get Up to Speed and the Sky Development Studio, a learning-management system built in-house, won the corporate e-learning industry award for outstanding achievement. The two programs have together significantly changed learning and performance across Sky's 17,000 workforce. Sky calculates that the implementation of these programs has delivered annual savings of more than £2.7 million. On top of this, Sky has increased satisfaction among its 10.3 million customers, while the savings have helped to finance new services like Sky Atlantic. Key to the e-learning strategy's success has been continuous innovation - the result of the Sky and Brightwave teams listening to learners and adapting to the needs of the business. The partners also combined to produce a course that communicates the key features of the new Sky Go product, which won the app of the year accolade in the Stuff Gadget Awards. Sky revamped its Mobile TV service last summer and its new iOS app, Sky Go, has already racked up a million downloads. This slick software for iPhone and iPad lets Sky customers stream, pause and rewind live TV from various Sky packages and channels. The successful launch of this app demonstrates the important role e-learning has played in the product-launch process. 
Brightwave designed the e-learning course to inform and enthuse Sky sales people and engineers about Sky Go, ensuring that it was then sold effectively and enthusiastically to Sky customers. The process demonstrates how integrating e-learning into the new-product process can achieve excellent results for organizations.\nCollaboration pays dividends: The e-learning industry award judges commented: \"From time to time, this award is presented for an outstanding whole body of work which is making a difference across all facets of an organization. The collaboration between Sky and Brightwave has more than successfully reshaped e-learning from top to bottom.\"Kenneth Henderson explained: \"The blend of an internal-developer team combined with our strategic partnership with Brightwave has really paid dividends and adds to the expertise that we have internally, demonstrating thought-leadership and technical experience across a range of initiatives, which help us to deliver tangible business results.\"Charles Gould, managing director at Brightwave, said: \"Our success at this year's e-learning awards recognizes the exceptional collaboration between Brightwave and Sky which has developed into a true strategic partnership. Our commitment to helping our clients to achieve a positive, measurable impact has been embraced by the talented Sky team who have more than matched our dedication to achieving this goal.\"\n",
"role": "user"
},
{
"content": "Purpose: - Describes an award-winning induction program for new sales advisors at satellite-broadcasting company Sky.\nDesign/methodology/approach: - Explains the reasons for the program, the form it takes and the results it has achieved.\nFindings: - Reveals that the Get Up to Speed e-learning program has: helped to ensure that new starters are knowledgeable about Sky's products and services; accelerated the time it takes for advisors to become fully competent; helped to reduce employee turnover; and improved customer-service and sales skills.\nPractical implications: - Details how the on-boarding portal has met and exceeded objectives in the following ways: sales conversions for new starters increased in week one; some 37.5 hours were saved in training each recruit, which amounts to a potential saving of 75,000 hours a year; advisors were classed as competent at least one week earlier than before, despite a 25 percent reduction in face-to-face training time; and around £700,000 a year has been saved in induction costs.\nSocial implications: - Details how employee turnover can be reduced in industry sectors where \"churn\" is a significant problem.\nOriginality/value: - Explains the importance of getting the induction process right in order to create an environment where new hires are motivated and really care about their work from the start.\n",
"role": "assistant"
}
] |
An optimal sequential information acquisition model subject to a heuristic assimilation constraint
|
[
"Competitive strategy",
"Decision support systems",
"Rationality",
"Sequential information acquisition",
"Utility theory"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\n1. Introduction: The current paper defines the optimal sequential information acquisition process of a rational utility-maximizing decision maker (DM) when allowed to acquire n pieces of information from a set of bi-dimensional products whose characteristics have a continuum of variants.\n2. Literature review: The sequential information acquisition process of DMs has been analyzed from very different perspectives by quite a heterogeneous group of literatures, including, but not limited to, economics, operations research, and information sciences.\n3. Basic notations and main assumptions: The notations and initial assumptions we refer to when constructing our model are those of Di Caprio and Santos Arteaga (2009). However, for the sake of completeness, this section partially reproduces some formal definitions and related comments already described in Section 2 (Preliminaries and basic notations) and Section 3 (Main assumptions) of Di Caprio and Santos Arteaga (2009).\n4. The two-observation setting: expected search utilities: The set of all products, G, is identified with a compact and convex subset of the two-dimensional real space R2. In the simplest non-trivial scenario, G consists of at least two products and the DM is allowed to collect two pieces of information, not necessarily from the same product. The DM must use the first observation to check the first characteristic of any of the products in G. After checking the value of the first characteristic from an initial product, the DM must decide whether to keep acquiring information on the same product and, hence, check the value of its second characteristic, or start collecting information on a new product.\n5. Further observations: 5.1. The three-observation setting: expected search utilities\n6. Toward the n observations case: 6.1. Four observations\n7. 
Evolution and limitations: The information acquisition process described through the paper is only partially complete. That is, when more than two observations are acquired, subsequent recalculations are required on the side of the DM. Consider, for example, the three observations setting. In this case, starting is the optimal behavior of the DM after observing x\n8. Conclusions and managerial significance: The information acquisition behavior modeled in this paper corresponds to that of a perfectly rational DM, i.e. endowed with complete and transitive preferences, whose objective is to choose optimally among the products available subject to a heuristic assimilation constraint. The current paper opens the way for additional research on heuristic information acquisition and choice processes when considered from a satisficing perspective that accounts for cognitive limits in the information processing capacities of DMs. In this regard, the reference threshold value defining the heuristic mechanism and the resulting information acquisition process of DMs may be interpreted as the level of satisficing required by a DM when designing his information acquisition strategy. This heuristic mechanism has been imposed to simplify a process that would otherwise require an increasingly larger calculation and information processing capacity on the side of the DM as the number of observations that may be acquired or the number of characteristics defining the products increases. Despite the heuristic mechanism imposed, a risk neutral DM has been shown to exhibit an uncertain and almost completely random information acquisition behavior when four observations are allowed to be gathered. This result illustrates the difficulties that arise when trying to forecast the behavior of rational DMs even in the current heuristic setting with a relatively small number of observations being acquired.\n",
"role": "user"
},
{
"content": "Purpose: The purpose of this paper is to study the optimal sequential information acquisition process of a rational decision maker (DM) when allowed to acquire n pieces of information from a set of bi-dimensional products whose characteristics vary in a continuum set.\nDesign/methodology/approach: The authors incorporate a heuristic mechanism that makes the n-observation scenario faced by a DM tractable. This heuristic allows the DM to assimilate substantial amounts of information and define an acquisition strategy within a coherent analytical framework. Numerical simulations are introduced to illustrate the main results obtained.\nFindings: The information acquisition behavior modeled in this paper corresponds to that of a perfectly rational DM, i.e. endowed with complete and transitive preferences, whose objective is to choose optimally among the products available subject to a heuristic assimilation constraint. The current paper opens the way for additional research on heuristic information acquisition and choice processes when considered from a satisficing perspective that accounts for cognitive limits in the information processing capacities of DMs.\nOriginality/value: The proposed information acquisition algorithm does not allow for the use of standard dynamic programming techniques. That is, after each observation is gathered, a rational DM must modify his information acquisition strategy and recalculate his or her expected payoffs in terms of the observations already acquired and the information still to be gathered.\n",
"role": "assistant"
}
] |
Web page prediction from metasearch results
|
[
"Search engines",
"Individual behaviour",
"Worldwide web",
"Information retrieval"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\n1. Introduction: Internet users generally hope that search engines can locate the exact required information. The primary aim of search engine design lies in finding data relevant to user queries. This task is generally recognized as difficult (Jansen et al., 1998) because the inputs supplied by the users are generally insufficient for collecting suitable data. A search of previous literature showed that a user on average enters 2.3 terms for a given query (Spink et al., 2001). Such a query generally produces thousands of results, so every search engine ranks the results by ranking algorithms (Li, 1998). Often the users are only concerned with the top ranking results, and ignore the rest (Sougne, 2000). Search engines generally use information retrieval (IR) techniques to find web pages or documents relevant to a user query in a database of web pages. Traditional IR methods measure the similarity between user query and document contents by models including vector space models, probability models, and fuzzy logic models (Yates and Neto, 1999). These methods are primarily based on the keyword matches and their frequency in documents, and therefore easily result in \"keyword spamming\". That is, a document's rank can be manipulated by duplicating the same set of keywords several times in a document (Yates and Neto, 1999). However, search engines can now detect and penalize keyword spamming (Mall-Net, 2001). To solve the keyword spamming problem, Li (1998) developed the HVV method. HVV uses a ranking scheme similar to that of the science citation index (SCI), which ranks each web page by the number of times other web pages contain links to it. Thus, a web page would have a high weight if several hyperlinks point to it. However, this method does not distinguish the ratings given by high-quality web pages from those given by low-quality web pages. 
Henzinger (2001) discussed the quality of hyperlinks in a PageRank algorithm, which is recursively applied to an arbitrary set of initial web pages and therefore needs considerable computing power to analyze the hyperlinks and page ranks (Sobek, 2002). As the web continues to grow at millions of pages per day (Chakrabarti et al., 1999), many source web pages need to be collected, and many experiments need to be performed to demonstrate the effectiveness of this method.Collecting search results from several search engines is an effective way to collect and rate relevant web pages (Selberg and Etzioni, 1997). This method is called metasearch (Dreilinger and Howe, 1996; Selberg and Etzioni, 1997). The metasearch approach saves a lot of time by searching only in one place and eliminating the need to use and learn several separate search engines. The quality of metasearch results depends on the search engines used and how they organize the results.In this study, a metasearch method, called search engine vector voting (SVV), was developed to rate the search results obtained from several well-known search engines. The relevance of a web page to a given query is determined by its frequency and rankings in the search results returned by each search engine. A web page wins a vote from a search engine if it is listed on the top 50 items returned by a given query. The SVV method can solve the two problems encountered in using HVV stated above, since well-known search engines are generally recognized as high-quality web sites, and SVV only considers the top 50 matched web pages from each search engine. More importantly, the adoption of SVV would eliminate possible bias from any single search engine.Web search engines generally respond to user queries with web page URLs and short descriptions. Users may browse through the URLs to reach the pages most relevant to their queries. 
Therefore, a search engine can significantly save users' time by directly providing the web pages and their embedded links most relevant to user queries. Nick and Themis (2001) implemented an intelligent agent system that uses a genetic algorithm to recommend web pages directly to users. To help the intelligent agent in learning a user's interests, the user is requested to provide some web pages examples of interest in advance. This task is very hard for novice users, who may give inappropriate examples resulting in unacceptable outcomes. Thus, a better approach is required to solve the example input problem.A hyperlink prediction (HLP) method was then developed to recommend web pages referred from the URLs listed in SVV results to the users. HLP is based on the assumption that the users would find more interesting pages by following many successive hyperlinks on high-ranking web pages in SVV results.The rest of this study is organized as follows. Section 2 introduces some related work. Section 3 discusses SVV and HLP in detail. Section 4 presents three experiments using the proposed search methods. The first experiment was a user study on the quality of search results for some hot keywords. The second experiment compared SVV with the four best metasearch search engines. The third experiment compared SVV and HLP with six well-known search engines. Finally, Section 5 concludes this study and discusses future research.\n2. Related work: This section reviews some important literature related to this study. First, we provide a literature review on metasearch. Second, we discuss some related work on hyperlink exploration.2.1 Metasearch\n3. Search methods: This section describes the proposed search methods in detail. First, the underlying user behavior function is presented. Then, the formulation and implementation of SVV and HLP are described.3.1 User behavior function\n4. Experiments: Three performance tests were performed on the proposed search methods. 
First, 23 volunteers were recruited from local bulletin boards to judge the quality of search results for some hot keywords. Second, the SVV metasearch engine was compared with the four best metasearch engines (Sherman, 2002a, b). Third, the SVV and HLP were compared with six well-known search engines.4.1 User study\n5. Conclusions and future work: This study developed SVV, a metasearch and ranking method and HLP, a post-search and ranking method. These methods working together can recommend a user's next hyperlink access based on metasearch results. The proposed design is based on the primacy effect of browsing behavior, that users favor top ranking items in search results. This approach is intuitive and needs no training data at all. A user study showed that users are more satisfied with the proposed search methods than general search engines on hot keywords. Moreover, two performance measures confirmed that the proposed search methods out-perform other metasearch and search engines.Future work is planned along several directions. First, the search methods implemented are primarily based on the keyword match between the contents of web pages and the user query items. Using the semantic web in recommending concepts or items relevant to the user query might help finding the exact contents required by users. This approach is particularly helpful when the users lack sufficient knowledge about the domains in which they are searching. Second, a mechanism that groups search results for improving the way search results are segmented and displayed also helps users locate the required information. Finally, further user feedback is needed to fine-tune the search parameters, including a and b, to improve the performance of the proposed methods.\n",
"role": "user"
},
{
"content": "Purpose: - This study aims to present a new web page recommendation system that can help users to reduce navigational time on the internet.\nDesign/methodology/approach: - The proposed design is based on the primacy effect of browsing behavior, that users prefer top ranking items in search results. This approach is intuitive and requires no training data at all.\nFindings: - A user study showed that users are more satisfied with the proposed search methods than with general search engines using hot keywords. Moreover, two performance measures confirmed that the proposed search methods out-perform other metasearch and search engines.\nResearch limitations/implications: - The research has limitations and future work is planned along several directions. First, the search methods implemented are primarily based on the keyword match between the contents of web pages and the user query items. Using the semantic web to recommend concepts and items relevant to the user query might be very helpful in finding the exact contents that users want, particularly when the users do not have enough knowledge about the domains in which they are searching. Second, offering a mechanism that groups search results to improve the way search results are segmented and displayed also assists users to locate the contents they need. Finally, more user feedback is needed to fine-tune the search parameters including a and b to improve the performance.\nPractical implications: - The proposed model can be used to improve the search performance of any search engine.\nOriginality/value: - First, compared with the democratic voting procedure used by metasearch engines, search engine vector voting (SVV) enables a specific combination of search parameters, denoted as a and b, to be applied to a voted search engine, so that users can either narrow or expand their search results to meet their search preferences. 
Second, unlike page quality analysis, the hyperlink prediction (HLP) determines qualified pages by simply measuring their user behavior function (UBF) values, and thus takes less computing power. Finally, the advantages of HLP over statistical analysis are that it does not need training data, and it can target both multi-site and site-specific analysis.\n",
"role": "assistant"
}
] |
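The search engine vector voting (SVV) idea summarized in the row above can be illustrated with a toy rank-voting sketch. This is an invented illustration, not the paper's actual SVV algorithm: the function name, the page identifiers, and the exact role of the parameters a and b are all assumptions made here for clarity.

```python
# Toy sketch of primacy-weighted voting across search engine rankings.
# NOT the paper's SVV algorithm: names, data, and the exact use of the
# parameters a and b are invented for illustration only.

def vote(rankings, a=1.0, b=0.5):
    """Score each page by summing a / rank**b over every engine that lists it,
    so top-ranked pages (the primacy effect) contribute the most weight."""
    scores = {}
    for ranking in rankings:
        for rank, page in enumerate(ranking, start=1):
            scores[page] = scores.get(page, 0.0) + a / rank ** b
    # Highest combined score first.
    return sorted(scores, key=scores.get, reverse=True)

engines = [["p1", "p2", "p3"],   # engine A's ranked results
           ["p1", "p3", "p4"]]   # engine B's ranked results
merged = vote(engines)
print(merged)  # p1 is ranked first by both engines, so it leads
```

Raising b concentrates weight on the top ranks (narrowing results), while lowering it spreads weight down the list, which loosely mirrors the abstract's claim that users can tune the parameters to narrow or expand their results.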
Examining users' knowledge sharing behaviour in online health communities
|
[
"Trust",
"Social support",
"Knowledge sharing",
"Online health communities",
"Community quality",
"Privacy risk"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\n1. Introduction: Online health communities (OHC) provide a platform for users to acquire health information and share their knowledge. Due to its advantages, such as convenience and efficiency, OHC have received wide adoption among users. CNNIC's (2017) report indicated that about 26.6 per cent of internet users (195m) have ever used online health services. In China, a few reputable OHC such as Haodf and Dxy have achieved great success. Users can discuss with other members about a common topic and exchange their experience and knowledge. They may also express social support to other members, which may help alleviate the anxiety and improve moods (Liu et al., 2018). The user-generated content provides great value for OHC providers. However, previous research has found that users have typically acquired information but contributed little on the OHC (Yan et al., 2016; Zhang et al., 2017). This may impede the sustainable development of OHC and eventually lead to the failure. Thus, service providers need to understand the factors affecting OHC user behaviour and take effective measures to facilitate users' knowledge sharing behaviour.\n2. Literature review: 2.1 The socio-technical perspective\n3. Research model and hypotheses: In this research, we draw on the SOR framework as the theoretical base to develop the research model and hypotheses. The SOR suggests that environmental stimulus affects a user's internal state, which in turn leads to his or her behavioural response (Mehrabian and Russell, 1974). The SOR framework provides a useful lens to uncover users' internal decision process underlying their behaviour. It has been applied to examine consumers' participation in co-creation (Zhang et al., 2015), the flow experience (Liu et al., 2016) and social shopping (Hu et al., 2016; Fu et al., 2018). This research generalizes it to the OHC context. 
We propose that the stimulus includes both community quality and social support, which reflect the technical and social factor, respectively. The organism includes trust and privacy risk. The response is sharing intention and behaviour.\n4. Method: The research model includes ten factors. Except for knowledge sharing behaviour, all other factors were measured with multiple items. All items were adapted from extant literature to improve the content validity. These items were first translated into Chinese by a researcher. Then another researcher translated them back into English to ensure consistency. When the instrument was developed, it was tested among ten users who had OHC usage experience. Then, according to their comments, we revised a few items to improve the clarity and understandability. The final items and their sources are listed in the Appendix. Each item was measured with a five-point Likert scale.\n5. Results: Data analysis includes two steps. First, we examined the measurement model to test reliability and validity. Then we examined the structural model to test research hypotheses.\n6. Discussions: As shown in Figure 2, all hypotheses except H1 and H7 were supported. Among the three factors of community quality, both information quality and service quality have significant effects on trust in community. This suggests that users assess both information quality and service quality to establish their trust in an OHC. As users access OHC for health information, they are much concerned with the information accuracy and sufficiency. They also expect to obtain reliable and personalized services. Service providers can offer personalized information and services to users based on their preferences and browsing history. We did not find an effect of system quality on trust in community. This may be for two reasons. 
First, system quality reflects a non-humanized technical aspect, whereas both information quality and service quality reflect the humanized technical aspects. Users mainly rely on the humanized aspects to form their evaluation of the community trustworthiness. Second, users access OHC to obtain health information and services. This enables them to be mainly concerned with information quality and service quality rather than system quality to develop their trust in a community.\n7. Theoretical and managerial implications: From a theoretical perspective, this research adopted the SOR framework to examine OHC users' knowledge sharing behaviour. As noted earlier, although previous studies have identified a few determinants of OHC user behaviour such as trust and privacy concern, they have seldom examined the internal decision process of users' knowledge sharing. In other words, the underlying mechanism of how external factors affect a user's internal decision remains a question. Our results indicated that both social factor (social support) and technical factor (community quality) affect a user's trust, which in turn determines his or her sharing behaviour. The results disclose OHC users' behavioural process and advance our understanding of OHC users' knowledge sharing. Second, we examined the effects of both social and technical factors on OHC user behaviour by employing a socio-technical perspective. The results provide a comprehensive understanding of the determinants of OHC user behaviour. This extends previous research, which has been mainly concerned with a single perspective. Third, compared to trust in community (system trust), trust in other members (social trust) has a stronger effect on the sharing intention. This highlights the important role of social trust in facilitating OHC users' knowledge sharing. The results also enrich extant research, which has seldom tested the concurrent effects of both system trust and social trust on OHC user behaviour. 
Fourth, besides both types of trust, privacy risk also negatively affects knowledge sharing intention. The results provide a complete understanding of OHC user behaviour by integrating both enablers (trust) and inhibitors (privacy risk) of knowledge sharing.\n8. Conclusion: As a popular application, OHC have been widely adopted by users. However, most users have acquired information but seldom shared their knowledge on the community. This research examined the decision process underlying users' knowledge sharing behaviour on OHC. The results suggest that both social support and community quality play significant roles in facilitating users' sharing health knowledge.\n",
"role": "user"
},
{
"content": "Purpose: The purpose of this paper is to draw on the stimulus-organism-response (SOR) framework to examine users' knowledge sharing in online health communities (OHC).\nDesign/methodology/approach: Based on the 326 valid responses collected from a survey, structural equation modelling was employed to examine the research model.\nFindings: The results indicated that both information quality and service quality affect trust in community, whereas both informational support and emotional support affect trust in other members. Both types of trust and privacy risk determine users' sharing intention, which in turn affects sharing behaviour.\nResearch limitations/implications: The results imply that service providers need to improve their community platforms and create a supportive climate in order to facilitate users' trust and their knowledge sharing behaviour.\nOriginality/value: Previous studies have examined a few determinants of OHC user behaviour such as privacy concern, trust and motivations. However, they have seldom disclosed the internal decision process underlying users' knowledge sharing. This research tries to fill the gap.\n",
"role": "assistant"
}
] |
A theoretical model of transformational leadership's role in diverse teams
|
[
"Knowledge creation",
"Transformational leadership",
"Modelling"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\n__NO_TITLE__: According to the knowledge-based view of the firm, an organization's ability to create knowledge is a source of competitive advantage (Conner and Prahalad, 1996; Grant, 1996; Kogut and Zander, 1992; Spender, 1996; Tsoukas, 1996). Unlike knowledge, which is often rare for a limited period (Peteraf and Barney, 2003; Zollo and Winter, 2002), as organisations develop and implement routines to create new knowledge their expertise becomes increasingly rare, socially complex and causally ambiguous (Coff, 2003; Lado and Zhang, 1998; Lippman and Rumelt, 1982). In view of the potential benefits from developing knowledge creating capabilities, the numerous failed attempts by organizations to do so (Dachler, 1992), and the relatively limited research published (Hargadon, 2002; Un and Cuervo-Cazurra, 2004) knowledge creation continues to be an area of research focus that warrants investigation (Corti and Storto, 2000; Liedtka, 1999; Zollo and Winter, 2002). This paper responds to this research need by theoretically investigating the role of leadership in facilitating the creation process in teams of diverse composition. Our focus is on the impact of composition on knowledge creation, in recognition of the many studies and knowledge creation models that have included a focus on diverse composition (Drach-Zahavy and Somech, 2001; Fong et al., 2007; Gibson, 2001; McFadyen and Albert, 2004; Nonaka et al., 2000; Parent and Gallupe, 2000; Un and Cuervo-Cazurra, 2004).\nModel, conceptual background and propositions: Figure 1 shows the model in which we present two proposed relationships explaining the impact of diversity on knowledge creation. 
The first pathway depicts affective conflict, defined as conflict that engenders negative emotion (Jehn, 1995), as an affective mediator linking diversity and knowledge creation, the second pathway depicts cognitive heterogeneity, defined as the extent to which the team reflects differences in knowledge, including beliefs, preferences and perspectives (Miller et al., 1998), as a cognitive mediating mechanism. For each of these pathways, we argue the role of transformational leadership as a moderator of the mediated relationship.In the following sections, we provide the rationale underlying our model development, and develop theoretical arguments supporting the proposed relationships. We discuss how diversity relates to the mediators - affective conflict and cognitive heterogeneity, and present our reasoning to justify how these two mediators exert their influence on knowledge creation. We further present our argument to demonstrate how transformational leadership moderates each of these proposed relationships.Diversity is a characteristic of groups of two or more people and typically refers to differences in demographic characteristics such as age, race or gender (Ely and Thomas, 2001; McGrath et al., 2000). This study investigates functional diversity, defined as the extent to which a team is composed of members with different functional backgrounds (Pelled et al., 1999; Shaw and Barrett-Power, 1998), which is a defining characteristic of cross-functional teams (Pelled et al., 1999). 
Cross-functional teams, defined as teams comprised of members from different functional areas (Pelled et al., 1999), are increasingly used in organisations in central business tasks requiring the solution of complex problems, particularly those requiring knowledge creation (van der Vegt and Bunderson, 2005; West and Anderson, 1996).Comprehensive reviews of diversity research have identified two main theoretical perspectives regarding its relationship to performance (van Knippenberg et al., 2004; Williams and O'Reilly, 1998). The first is termed the information/decision-making perspective, which holds that diverse groups are more likely to possess a broader range of task-relevant knowledge than their homogeneous counterparts. The advantages bestowed by diverse composition are attributable to the breadth and depth of knowledge accrued, which allow the development of more comprehensive and more innovative solutions to complex problems consequent to the application of different perspectives and tacit knowledge (Ancona and Caldwell, 1992; DeDreu and West, 2001). The second pathway through which diversity impacts performance is reflected in the social categorization perspective which holds that similarities and differences between ourselves and others provide the basis for ingroup or outgroup categorisation. Building on the similarity/attraction paradigm, ingroup members are trusted and favoured, while relations with outgroup members are characterised by information withholding and conflict (Tajfel, 1970; Williams and O'Reilly, 1998).Throughout the management literature, these two pathways have been used to predict opposing outcomes for diverse teams, the former providing an explanation for innovation in complex, non-routine tasks due to the integration of broad-ranging and previously disconnected knowledge and the latter providing an explanation for group failure due to conflict and ensuing barriers to collaboration and knowledge sharing. 
The present study differentiates itself from previous research by explicitly investigating the role of transformational leadership as a factor capable of influencing the outcome of both of these pathways and subsequently the performance of diverse teams tasked with knowledge creation.Transformational leadership has been described as guidance through four leadership components: idealised influence, inspirational motivation, intellectual stimulation and individualised consideration (Bass, 1985; Bass et al., 2003). Idealised influence is the degree to which leaders utilise charisma to facilitate followers' emotional attachment and psychological identification (Bass, 1985). Inspirational motivation is the degree to which leaders articulate a compelling vision (Bass, 1985). Intellectual stimulation is the degree to which leaders challenge existing assumptions, stimulate new ideas and encourage high-level reasoning, rationality and evidence (Bass, 1985). Individualised consideration is the degree to which leaders provide personal attention and encouragement to followers (Bass, 1985). Transformational leadership is argued to affect performance directly, and indirectly through its effects on subordinates' satisfaction with their leadership and their affective commitment (Barling et al., 1996; Howell and Avolio, 1993).This paper proposes two alternative propositions relating to the mechanisms through which leadership impacts the performance of diverse teams. The first section focuses on the information/decision-making perspective and depicts the role of leadership in facilitating constructive cognitive effects on group performance. The second section focuses on the social categorisation perspective and depicts the role of leadership in mitigating against destructive emotional barriers to group effectiveness. 
The first set of propositions argues that diversity is linked to knowledge creation through the mediator of cognitive heterogeneity, and that transformational leadership moderates this relationship by contributing to the circumstances under which cognitive heterogeneity emerges, and is effectively utilised towards knowledge creation, in diverse teams. The second set proposes that diversity is related to affective conflict, which in turn constrains team knowledge creation. Transformational leadership moderates this relationship by mitigating against the emergence of affective conflict, and by overcoming the impact of affective conflict in knowledge creation efforts. The arguments developed to support these propositions follow this reasoning.Information/decision-making perspective\nDiscussion: Implications for theory and research\n",
"role": "user"
},
{
"content": "Purpose: - The purpose of this paper is to develop a theoretical model of leadership and knowledge creation by drawing on two contrasting diversity perspectives. The model argues a moderating role for leadership in explaining the influence of diverse composition on team knowledge creation.\nDesign/methodology/approach: - A model of leadership's impact on knowledge creation is developed based on an interdisciplinary review of literature spanning the creativity, innovation and learning literature, diversity management, top management team demography and upper echelons literature and learning from transformational leadership research.\nFindings: - The main contribution of this paper is the development of a model of transformational leadership's impact on knowledge creation in diverse teams based on a series of propositions. Focusing on the information/decision-making perspective, our model depicts the role of leadership in facilitating constructive cognitive effects on knowledge creation. From the social categorisation perspective our model depicts the role of leadership in mitigating against destructive emotional barriers to group effectiveness.\nResearch limitations/implications: - This paper's review and model indicate significant implications for future research and practice. The model provides a better understanding of the effects of transformational leadership on knowledge creation. The paper present justification for two pathways, cognitive and affective, through which leadership moderates the impact of diversity on team outcomes. By examining the model and its proposed relationships, we believe that it will enable researchers to shed new light on these issues for further investigations.\nOriginality/value: - This review is among the first to address the role of transformational leadership in diverse teams, and one of the only papers to connect leadership, diversity and knowledge creation.\n",
"role": "assistant"
}
] |
Wage growth and occupational mobility in Spain: movers vs stayers
|
[
"Employment",
"Wages"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\n1. Introduction: The role of labor mobility as a mechanism to improve the allocative efficiency of workers has been one of the main topic in labor economics for the last years. A particular subject that has attracted the interest of numerous studies is the effect of job mobility on the workers' wage growth. Concerning inter-firm job mobility, the job search model (Mortensen, 1986) suggests that if the workers' productivity is known ex ante, the effect of job mobility on wages should disappear once the personal and job characteristics were controlled in the wage equation. Instead, the job-matching model (Jovanovic, 1979) and the raiding model (Lazear, 1986) argue that if the productivity is unknown ex ante, a significant effect of job mobility on wages is possible.\n2. Model and econometric specifications: This section develops the econometric models used to examine the factors affecting occupational mobility and, subsequently, its influence on workers' wage growth. To achieve these aims, first, it is necessary to define a variable that shows downward and upward occupational changes, and not occupational change. Through the LCS, it is possible to generate a proxy to this variable, since occupations are coded according to the National Classification of Occupation (CNO-94), which establishes an occupational hierarchy based on the performance area and qualification required to carry out the job[2]. With the exception of the employees of the armed forces, the first code would correspond to the managers and the last one to the unskilled workers. In this way, occupational mobility is defined as movements among occupations at the one-digit level, since the LCS does not allow further disaggregation. 
The three possible situations, in which a worker i at period t+1 can be found with respect to period t, are the following: the individual registers a downward occupational mobility, remains in the same occupation or experiences an upward occupational mobility. These three settings are explained through a random effect panel multinomial logit model to account for the unobserved individual heterogeneity. In this model, the probability of being in the state s (s=1, 2, 3) conditional on observed characteristics Xit (that, initially, vary between individuals and over time) and on unobserved individual effects, ais, has the following expression[3]: (1)\n3. Data: The data set used in this paper comes from the LCS provided by the National Statistics Institute (INE). The LCS replaces the European Community Households Panel Data for Spain (INE, 1994-2001) and collects information with both transverse and longitudinal dimension. This study employs the longitudinal component that follows the individuals during consecutive years. Specifically, the structure of the longitudinal data from the LCS corresponds to a rotating panel. This panel is composed of four subpanels each year, whose maximum duration in the survey is four years, so one of the subsamples is replaced by another one each year. The period covered by this analysis corresponds to the years 2005-2010.\n4. Results: This section is dedicated to the presentation of the results obtained from estimating the models specified in Section 2. Focusing on the odds ratios from the random effect panel multinomial logit model that appear in Table VIII, first, it is observed that inter-firm job mobility exerts a positive and significant influence when it comes to occupational mobility (both for downward and upward). 
This result is coherent with the predictions of the theory of career mobility formulated by Sicherman and Galor (1990), where the change of firm can be considered as an optimum decision to maximize future labor earnings. So, workers decide to leave their firms to be promoted in an outside company, or they may accept a downward occupational mobility if the probability of future promotions is higher in the new firm than in the previous one. Second, among the group of movers, it is found that leaving the company voluntarily has positive effects on their careers, once the effects of the remaining regressors are controlled. In particular, the probability of upward occupational mobility for quitters duplicates the one corresponding to the rest of movers. They are likely to undertake job search while employed, whereas the laid-off worker may engage in job search after separation, which increases the probability of the former receiving better job offers. Moreover, according to the signalling theory (Blanchard and Diamond, 1994), layoffs have more probability of downward occupational mobility, as employers are unable to observe the productivity of job applicants, so they decide to offer worse jobs to unemployed people.\n5. Conclusions: This paper has provided empirical evidence on some aspects of the factors affecting career success of workers in Spain, using data from the LCS (Instituto Nacional de Estadistica (INE), 2005-2010). In particular, the issue analyzed has been the causal links between inter-firm mobility, occupational mobility and wage growth. From a descriptive point of view, it has been proven that movers have the highest percentages of occupational change and, among movers, quitters are more likely to move up the occupational ladder. 
These results hold after estimating the random effect panel multinomial logit model and are coherent with the predictions of the theory of career mobility proposed by Sicherman and Galor (1990), that is, inter-firm mobility is a mechanism used by the worker to achieve upward mobility. This conclusion would justify the implementation of active labor market policies that increase the market transparency, decrease the information asymmetries between workers and employers and improve the adjustment process between job offer and demand. Another interesting result is that the accumulation of human capital has a positive and significant influence on promotions both for movers and stayers, getting the highest probability of upward occupational changes for workers with higher education. This proves that university education has an insufficient career-specific component in Spain, that is, work-related skills are not obtained absolutely via pre-work training in the educational system.\n",
"role": "user"
},
{
"content": "Purpose: The purpose of this paper is to analyze the causality between job mobility status, occupational career and wage growth. First, it will be verified whether the type of job-to-job mobility has some influence on the occupational mobility and, second, whether the type of occupational change has some effect on wage growth for both movers and stayers.\nDesign/methodology/approach: The influence of job-to-job mobility on occupational mobility is explained through a random effect panel multinomial logit model to account for the unobserved individual heterogeneity. Next, the effect of occupational mobility on wage growth for both movers and stayers is obtained after applying Heckman's two-step procedure that corrects the endogeneity of occupational mobility in a panel data framework.\nFindings: The main results confirm, first, that inter-firm mobility is a mechanism used by the worker to achieve upward mobility. Second, the accumulation of human capital has a positive influence on promotions for movers and stayers, getting the highest probability of upward occupational mobility for workers with higher education. Moreover, promotion is a suitable mechanism for improving worker's wages. In particular, the wage gain is between 5 percent for stayers and 9 percent for quitters compared to workers who do not change their occupation.\nSocial implications: The main findings of this research would justify the implementation of active labor market policies that increase market transparency and decrease information asymmetries between workers and employers. 
In this way, the adjustment process between job offer and demand would improve and workers would have more possibilities of ascending on the occupational ladder and getting better-paid jobs.\nOriginality/value: To the best of the author's knowledge, this paper is the first study in the economic literature discussing Spain that examines the links between change of firm, occupational mobility and wage dynamics.\n",
"role": "assistant"
}
] |
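The wage-mobility row above refers to a random-effects panel multinomial logit whose probability expression, cited as equation (1), was lost in extraction. A standard textbook form of that model, consistent with the notation Xit and ais used in the quoted text, is sketched below; this is a reconstruction, and the paper's exact parameterisation may differ.

```latex
% Standard random-effects multinomial logit for states s = 1, 2, 3
% (downward move, stay, upward move). For identification, beta_s and
% a_is of one reference state are typically normalised to zero.
P(y_{it} = s \mid X_{it}, a_{i}) =
  \frac{\exp(X_{it}\beta_s + a_{is})}
       {\sum_{j=1}^{3} \exp(X_{it}\beta_j + a_{ij})}
```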
Socio-economic factors determining the way e-tourism is used in European Union member states
|
[
"European Union",
"ICT",
"E-Services",
"Consumer behaviour",
"E-Tourism"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\nIntroduction: The influence of tourism on the economy of different regions has been studied repeatedly (Ahlert, 2007; Kucukaltan and Terzioglu, 2013; Lew, 2011; Oosterhaven and Fan, 2006; Paicu and Hristache, 2013; Zhang et al., 2007). Travel and tourism's (T\nLiterature overview: ICTs are currently one of the most significant factors contributing to socio-economic development, or the development of civilization in a broader sense. Their dynamic expansion influences contemporary economies and societies, transforming nearly all areas of life. The development of ICT is a source of a type of technological shock, which, having shaken the foundations of production models of information, knowledge, and culture, has given birth to a period of socio-economic transformation (Benkler, 2008, p. 399). Gradually, the expansion of state-of-the-art technologies is altering business activity by influencing the relationships among companies and suppliers or clients as well as production processes, cooperation with other companies, and means of financing companies (Castells, 2003, p. 77).\nResearch objectives: The main objective of the following paper is to test the frequency of the use of e-tourism by consumers/internet users in European Union member states and to identify the socio-economic variables that determine this frequency. Identification of these factors will allow for even greater development of e-tourism in European Union member states, and consequently might safeguard this sector against the difficulties it faces every now and again. 
It seems that research results may be transferred across other sectors of industry employing ICT in their operations as well.\nMethodology of research: The socio-economic variables (that were mentioned in the research objectives) that may determine the use of e-tourism include factors such as: age, sex, education level, occupation, professional position, and country of residence. To achieve the objectives of this study, relationships between these variables and the frequency of the use of e-tourism services by consumers in 28 countries in Europe (27 European Union member states and Croatia, which became a member state on January 1, 2013) were analyzed. The analysis was conducted with an SPSS program and was based on secondary data generated in a survey commissioned by the European Commission. The survey was conducted between November 15 and December 7, 2012 in 32 European Countries. For the execution of the survey, an online panel survey approach was used to target and identify users of e-government services. For each of the 32 benchmarked countries, a representative sample of internet users was surveyed via local online panels to which the provider consortium had access. Due to the lack of an accessible online quality research panel, a CATI approach was used in Cyprus. The following steps formed part of the user survey and online data-gathering process: the preparation of the online questionnaire; translation of and feedback on the final version of the survey instrument; and validation of the translations by the European Union Member State representatives. All translated versions of the questionnaire were programmed, tested, and published online. Based on the methodological and budget considerations of the study, a sample size of 1,000 respondents was targeted for the larger countries (n=1,000; 95 percent reliability, maximal theoretical CI = ±3.10). 
For the five smallest countries (Croatia, Cyprus, Iceland, Luxembourg, and Malta), a sample size of 200 respondents was targeted (n=200; 95 percent reliability, maximal theoretical CI = ±6.93). In these smaller countries, no user panels existed. It was necessary to perform an online survey as it was not possible to use the CATI approach. Thus, a total sample of n=28,000 citizen respondents was necessary for the whole survey (European Commission, 2012a).\nResearch results: One of the main areas of analysis was the frequency of using e-tourism services in the whole population under analysis and in the populations of particular European Union member states. The largest share of respondents, amounting to 49.6 percent (n=12,379), used e-tourism services at least once a year but not every month. A statistically significant relationship was found between the frequency of using e-tourism services and the respondent's country of residence, χ² = 1928.83; df = 108; p\nDiscussion: Research has shown that the frequency of buying tourism services via the internet varies according to the country of residence of an internet user. This may be due to a distinct attitude to new technologies expressed by residents of particular countries, which is determined by factors such as cultural differences. It may also result from the level of prices of tourism services. As mentioned above, price is an important factor that contributes to decisions to buy tourism services via the internet (Elhaj, 2012).\nConclusions and recommendations: The performed research allowed the authors to draw a number of conclusions and formulate practical recommendations:\n",
"role": "user"
},
{
"content": "Purpose: The purpose of this paper is to test the frequency of the use of e-tourism by consumers/internet users of various European Union member states and to identify the socio-economic variables that determine this frequency.\nDesign/methodology/approach: The secondary data regarding the use of e-tourism services in 28 European Union member states were used for analysis. Relationships between variables and the frequency of the use of e-tourism services by consumers in 28 countries in Europe were analyzed. For each of the 32 benchmarked countries, a representative sample of internet users was surveyed via local online panels. Also CATI approach was used in one country.\nFindings: The authors of this study identify a statistically significant relationship between the frequency of the use of e-tourism by internet users and their country of origin. Statistically significant relationships between the frequency of using e-tourism and socio-economic variables, such as age, sex, education level, occupation, and professional position, are also identified.\nPractical implications: The findings of this study can provide tour operators with useful suggestions on how to extend the utility of internet (e-commerce) in order to develop their business, get more customers and improve their profitability. Simultaneously, these results may contribute to enhancing competitiveness of the whole European Union and increasing its gross domestic product.\nOriginality/value: With the worldwide rapid growth of internet (and e-commerce) many benefits can be reach by tourism sector especially in the growing competition. The paper is unique because it shows data collected from almost 25,000 respondents and it presents a comparison of the use of e-tourism among citizens of 28 European Union member states.\n",
"role": "assistant"
}
] |
Effective virtual new product development teams: an integrated framework
|
[
"Virtual work",
"Team working",
"Product development"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\n__NO_TITLE__: An executive summary for managers and executive readers can be found at the end of this issue.\nIntroduction: It is widely acknowledged that new product development is a critical activity in firms. The processes involved in the continuous development of new products are considered the \"lifeblood\" of firms and \"engines\" for growth, profitability, and survival (Atuahene-Gima, 2003; Biemans, 2003). As Gordon et al. (1997, p. 33) maintain, \"New product development may hold the key to solving organizations' biggest problems - those associated with gaining and maintaining competitive advantage\". Therefore, new product development teams (hereafter, NPD teams) are crucial components of a firm's overall marketing strategy (Sarin and McDermott, 2003).Radical transformations in the way teams are structured are occurring within firms. First, firms are developing flatter, more decentralized, and more adaptive organizational structures. These new structures encourage managers to develop more flexible job designs, which result in an increasing number of employees performing their duties off-site. Second, to compete more effectively, firms are collaborating more actively with global partners and drawing on resources that are often geographically dispersed. Consequently, team-building efforts are transcending organizational and national boundaries. Third, advances in technology are enabling firms to interact and communicate with business partners in new cost-effective ways (e.g. extranets and mobile technologies). These changes are fuelling the emergence of virtual teams (i.e. 
teams that use technology to work across locational, temporal, and relational boundaries to accomplish interdependent tasks) (Aubert and Kelsey, 2003; Bell and Kozlowski, 2002; Hertel et al., 2004; Lipnack and Stamps, 1999; Townsend et al., 1998).Given the complexities involved in organizing face-to-face interactions among team members and the advancements in electronic communication technologies, firms are turning toward employing virtual NPD teams (Jacobs et al., 2005; Schmidt et al., 2001). However, insights on such teams are scarce in the marketing literature. Systematic research on virtual NPD teams is necessary because:* the success rate of even traditional, co-located teams is low, with just 5 percent of such teams meeting desired performance goals (Bensor-Armer and Hsieh, 1997);* failure rates in NPD teams remain high because of misapplication and mismanagement (Sarin and McDermott, 2003); and* given the unique challenges that virtual interactions entail, managing virtual teams is more complex than managing co-located teams (Kayworth and Leidner, 2001/2002).While authors have expounded on the many potential benefits of virtual teams, less research has focused on understanding the factors that influence virtual NPD team effectiveness. As Montoya-Weiss et al. (2001, p. 1251) maintain:In particular, new technologies are providing the means for work that is dispersed (carried out in different places) and asynchronous (carried out at different times). Establishing links and connections is no longer a question of technical feasibility. The key question is, How can organizations create virtual teams that work effectively?Ocker (2005, p. 22) echoes, \"Since virtual teams constitute an important and pervasive organizational structure, research with the aim of improving the effectiveness of these teams is vital\".Urging further research on virtual teams, Martins et al. (2004, p. 
823) emphasize that \"researchers need to draw on the theoretical foundations that have been used in prior research on teams, as well as on new theoretical bases that are uniquely relevant to virtual interaction, to develop a more theoretically grounded understanding on the functioning of virtual teams\". Accordingly, this article develops a conceptual framework that integrates views from diverse literature streams to outline key factors influencing virtual NPD team effectiveness (see Figure 1). In the following section, readers are given a general background on both virtual and NPD teams. Subsequently, an integrated conceptual framework is developed and a series of testable research propositions are presented. Finally, the implications of the integrated conceptual framework are discussed and guidelines for future research are provided.\nAn integrated conceptual framework and research propositions: Background\nConclusion and directions for future research: The use of virtual teams is becoming more prevalent in organizations. One area in which the uses of virtual teams show promise is new product development. Systematic research in this area is just emerging. Therefore, much remains to be known about effectively managing virtual teams. This article presents an integrated framework that identifies key factors that influence virtual NPD team effectiveness. The framework combines the literatures on virtual teams, NPD teams, resource-based theory, relationship marketing, knowledge management, interpersonal interaction, and communication. According to the framework, the nature and frequency of communication in virtual NPD teams fosters the formation of relationships among team members, promotes knowledge acquisition and integration, and facilitates interpersonal networking among the team members. These, in turn, have a direct bearing on the virtual team climate. That is, they affect the quality and speed of product development decisions. 
Ultimately, team climate enhances NPD effectiveness, which is characterized by creativity, innovativeness, and NPD speed. Note that this framework is an initial step in identifying key factors that influence virtual NPD team effectiveness. The framework still requires empirical validation.Because of the relative newness of virtual teams, many areas of research have not been examined. First, research has not looked into the effectiveness of virtual NPD teams in developing various types of new products (e.g. incremental, continuous, radical, etc.). Second, because NPD increasingly involves other organizations (e.g. major customers, key suppliers, manufacturers of complementary products, and competitors), researchers should compare the effectiveness of intra- versus inter-organizational virtual NPD teams. Third, researchers need to examine the usefulness of virtual teams in other contexts, such as customer relationship management (CRM). Fourth, specific team characteristics need to be identified that improve the development and maintenance of virtual interfaces.Fifth, because team leaders often play a critical role in NPD teams (Sarin and McDermott, 2003), their role should be explored in a virtual NPD context. As Kayworth and Leidner (2001/2002, p. 30) state, \"Virtual team leaders rated as effective by their members, demonstrate first and foremost a mentoring quality characterized by concern for the members, understanding, and empathy\". Finally, the concept of task complexity needs to be investigated because it may have critical implications for virtual teams as it \"sets constraints on the design characteristics of virtual teams and therefore influences the leadership functions that will be critical to the team's effectiveness\" (Bell and Kozlowski, 2002, p. 19).\n",
"role": "user"
},
{
"content": "Purpose: - Research on virtual teams is still in its nascent stages. The purpose of this paper is to develop a theoretically grounded integrative framework of key factors influencing the effectiveness of virtual new product development teams.\nDesign/methodology/approach: - The framework is developed by integrating perspectives from several research streams, including relationship marketing, new product development, knowledge management, the resource-based view, virtual teams, innovation, and communication.\nFindings: - Factors impacting effective virtual interactions (i.e. improved decision quality and decision speed) and new product development (i.e. increased levels of creativity, innovativeness, and product development speed) are proposed.\nResearch limitations/implications: - Guidance is provided for managing virtual new product development teams. The paper offers testable propositions that can serve as a foundation for further research in this promising area.\nOriginality/value: - By synthesizing relevant perspectives from diverse literature streams, the paper offers a new framework for understanding and improving the functioning of virtual new product development teams.\n",
"role": "assistant"
}
] |
Critical factors for knowledge management in project business
|
[
"Knowledge management",
"Project management",
"Critical success factors"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\n1. Introduction: Knowledge is now universally recognised as a critical competitive asset, and interest in knowledge management has therefore increased in most companies. At the same time, more firms are organising their business in terms of projects; indeed, project-based business has become an accepted business strategy among the range of potential business strategies available to firms (Prencipe and Tell, 2001). Taken together, a commitment to effective knowledge management in the context of a project-based business strategy is emerging as a potent means of establishing and sustaining competitive advantage.It is therefore not surprising that corporate spending on knowledge-management initiatives has increased significantly in all forms of business (including project-based businesses) in the past decade (Ithia, 2003). Organisations are implementing a range of initiatives to identify, share, and exploit their knowledge assets in accordance with a knowledge-based view of the firm in which knowledge is acknowledged as a key sustainable competitive resource (Kogut and Zander, 1992). Nevertheless, many project-based businesses lack the expertise to handle their knowledge assets (especially those gained from experience of previous projects); indeed, most knowledge-management initiatives in project-based firms have failed for a variety of reasons (including technological, cultural, knowledge content, and project management reasons) (Chua and Lam, 2005).The present study therefore attempts to identify and examine the critical factors that facilitate and/or impede knowledge-management initiatives in the context of physical project teams. The remainder of the paper is organised as follows. The next section presents a literature review of the key concepts of project-based business, knowledge management, and knowledge-management initiatives. 
Section 3 proposes a model of six critical success factors identified in the literature review. Section 4 presents the methodology and results of an empirical examination of the proposed model in the context of project-based organisations in Finland. Section 5 discusses the significance of the findings. Section 6 suggests certain implications for project managers flowing from the present study. The paper concludes with a summary of the major conclusions.\n2. Literature review: 2.1 Project-based business\n3. Conceptual framework: On the basis of the literature review presented above, a conceptual model of the factors that influence the success of KM initiatives in a project-based context is proposed by the present study. As shown in Figure 1, the proposed model consists of six distinct factors:1. familiarity with KM;2. coordination among employees and departments;3. incentive for knowledge efforts;4. authority to perform knowledge activities;5. system for handling knowledge; and6. cultural support.Each of these factors is discussed in more detail as follows.3.1 Familiarity with KM\n4. Empirical study: 4.1 Sample and data collection\n5. Discussion: The results of the study have revealed that the absence of incentives for employees who engage in KM initiatives was the most significant barrier to the success of such initiatives in the project-based firms studied here. The results suggest that senior management should offer suitable incentive schemes for employees to engage in KM initiatives if they want to increase the likelihood of success in such initiatives.The second most significant barrier to success in KM initiatives was the absence of a proper system to handle knowledge in the project-based organisations. The majority of respondents felt that there was no adequate system in their organisations to manage knowledge efficiently. 
It is apparent that appropriate KM systems in project-based businesses would be a significant factor in assisting KM initiatives to flourish. Such a system would facilitate the sharing of experience among employees through an integrated interface platform accessible to all interested participants in a project.A lack of coordination among employees and departments was the third most significant barrier to KM initiatives, while a lack of familiarity was the fourth most significant. It would seem that proper coordination among employees who are made familiar with the objectives and methods of KM would enhance the likelihood of success in KM initiatives.A lack of cultural support was only the fifth most significant barrier identified in the present study. It is apparent that a lack of cultural support was not perceived by the present respondents as being as significant as many of the other factors. Nevertheless, cultural support remains a fundamental element in any successful KM initiative. A culture of mutual trust and assistance encourages team members to depend on one another and the information they share, thus increasing the likelihood that they will communicate openly and effectively to achieve their shared goals.Finally, a lack of authority to perform knowledge activities was the least significant barrier to the success of KM initiatives. It is likely that the respondents perceived knowledge as a personal resource to use as they saw fit and that they therefore did not see any need for overt authority to share their personal knowledge. Nevertheless, it remains true that senior management should overtly encourage and authorise the sharing of personal knowledge in an attempt to enhance the skills and expertise of as many members of project teams as possible.\n6. 
Implications for project managers: The findings of this study have several important implications for project managers who wish to initiate successful KM practices within their projects.First, managers should formulate an attractive incentive package to motivate project members to engage in KM initiatives. Such an incentive system should also encourage members to suggest ideas for new KM opportunities for the project team.Second, managers should arrange seminars or workshops to familiarise project members with the basic objectives and methods of effective KM. It is apparent that employees cannot make meaningful contributions to KM unless they are familiar with the aims and processes that it entails. These seminars and workshops should make team members aware that the knowledge they possess is a valuable resource that must be managed in a sophisticated way to benefit individuals, the team of which they are part, and the organisation as a whole.Third, KM without coordination is more than difficult; it is almost impossible. Project managers should always ensure interdepartmental coordination to manage their KM initiatives successfully.Finally, managers need to foster an organisational culture that encourages participation in KM initiatives and assists all project members to perform their activities to the best of their ability.\n7. Conclusion: This study has examined the critical success factors for KM initiatives in project-based organisations. Drawing on the suggestions of various researchers in recent years, the study has proposed a conceptual model of such factors. Six factors have been identified and included in the model:1. familiarity with KM;2. coordination among employees and departments;3. incentive for knowledge efforts;4. authority to perform knowledge activities;5. system for handling knowledge; and6. 
cultural support.The findings of the empirical study have revealed that the absence of incentives and the lack of an appropriate system are the most significant barriers for successful KM initiatives in projects. A lack of inter-departmental coordination and unfamiliarity with KM were other significant barriers. A lack of authority to manage knowledge and an absence of cultural support were the least-significant barriers to successful KM initiatives in the project-based organisations studied here.The most important implications for project managers arising from the present study are that successful KM initiatives require appropriate incentives for team members and a user-friendly information system that facilitates the sharing and management of knowledge among all project participants. However, the generalisability of the findings may be limited by the small empirical sample.\n",
"role": "user"
},
{
"content": "Purpose: - The purpose of this study is to identify and examine various factors that influence the success or failure of knowledge management (KM) initiatives in project-based companies.\nDesign/methodology/approach: - Following a literature review, the study proposes a conceptual model of six factors of potential importance to the success of KM initiatives. The model is then examined through an online survey of project managers and assistant managers from project-based businesses in Finland.\nFindings: - The study finds that a lack of incentives and the absence of an appropriate information system are the most significant barriers to successful KM initiatives in projects.\nResearch limitations/implications: - The findings of the study may be restricted in terms of generalisability because of the limited empirical study.\nPractical implications: - Project managers should formulate an attractive incentive package to encourage project members to participate in KM initiatives and to suggest ideas for new KM opportunities. Managers should also ensure that an effective user-friendly information system is in place before introducing KM initiatives.\nOriginality/value: - The study proposes a new model of critical success factors for KM initiatives in the context of project-based business.\n",
"role": "assistant"
}
] |
Strategy and organisational cybersecurity: a knowledge-problem perspective
|
[
"Strategy",
"Complexity",
"Epistemology",
"Intellectual capital",
"Systems theory",
"Knowledge-problem",
"Cybersecurity theory"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\n1. Introduction: 1.1 Information technology and the cybersecurity problem\n2. Knowledge, strategy and cybersecurity - a theoretical precursor: Throughout the last three decades, interpretations of knowledge as a construct have underpinned several core strands of strategic management and organisational theory. Examples include the knowledge-based view of the organisation (Curado and Bontis, 2006), Dynamic capabilities (Eisenhardt and Martin, 2000; Zahra and George, 2002; Arend and Bromiley, 2009), and knowledge management (Nonaka and Peltokorpi, 2006; Ragab and Arisha, 2013). However, the utility of such attempts has been put into question for reasons which include an unclear or contended interpretation of knowledge (Wilson, 2002), varied degrees of perceived empirical utility (Edvardsson, 2009; Ibrahim and Reid, 2010), overly divided themes which dilute the original vision of progress (Davenport, 2015) and, eventually an inability to circumvent Occam's razor (Wilson, 2002).\n3. The cybersecurity strategy knowledge problem: Throughout the previous section we have proposed that the strategic utility of knowledge is inferential, as it guides system behaviour towards desired future states which, by definition, cannot be directly observed. Management strategies must infer the value of the procedures/processes employed in relation to an objective based on available evidence. The effectiveness of local measures of uncertainty mitigation must be evaluated. This applies even to adaptive strategies which rely on a suppression of inference due to a distrust in the accuracy or the utility of the process. As a result, the notion of a \"knowledge problem\" in relation to cybersecurity strategy can be perceived as a descriptor of a hostile context for effective inference. 
This can entail the absence of observable ontological regularities, or limitations in the ability to acquire and use relevant information concerning the vulnerability base, relevant threat behaviour and the potential effects of cybersecurity incidents on business or society.\n4. Conclusions: Cybersecurity management is a challenging dimension of the modern organisational landscape, and requires consideration within strategies, values, structures and practices. Its importance is exacerbated within organisational models which leverage intangible assets and intellectual capital as the primary platform for value generation. We argue that cybersecurity strategy is a multifaceted construct which benefits from a knowledge-centric narrative. From this new perspective, a cybersecurity strategy built upon effective knowledge management practices has the potential to channel the organisation's intellectual assets and their operationalisation towards value creation. This paper has set out the basis for such a narrative and highlighted the need for a pragmatic view of knowledge which entails emphasis on \"belief-(claim)-action-result\" rather than \"information-truth-belief-action\" when preparing for, dealing with, and recovering from cybersecurity incidents. By anchoring it to action, we have presented strategic knowledge as a local construct bound in its scale, scope and time value.\n",
"role": "user"
},
{
"content": "Purpose: The purpose of this paper is to frame organisational cybersecurity through a strategic lens, as a function of an interplay of pragmatism, inference, holism and adaptation. The authors address the hostile epistemic climate for intellectual capital management presented by the dynamics of cybersecurity as a phenomenon. The drivers of this hostility are identified and their implications for research and practice are discussed.\nDesign/methodology/approach: The philosophical foundations of cybersecurity in its relation with strategy, knowledge and intellectual capital are explored through a review of the literature as a mechanism to contribute to the emerging theoretical underpinnings of the cybersecurity domain.\nFindings: This conceptual paper argues that a knowledge-based perspective can serve as the necessary platform for a phenomenon-based view of organisational cybersecurity, given its multi-disciplinary nature.\nResearch limitations/implications: By recognising the knowledge-related vectors, mechanisms and tendencies at play, a novel perspective on the topic can be developed: cybersecurity as a \"knowledge problem\". In order to facilitate such a perspective, the paper proposes an emergent epistemology, rooted in systems thinking and pragmatism.\nPractical implications: In practice, the knowledge-problem narrative can underpin the development of new organisational support constructs and systems. These can address the distinctiveness of the strategic challenges that cybersecurity poses for the growing operational reliance on intellectual capital.\nOriginality/value: The research narrative presents a novel knowledge-based analysis of organisational cybersecurity, with significant implications for both interdisciplinary research in the field, and practice.\n",
"role": "assistant"
}
] |
The value of sociality
|
[
"Utility",
"Risk attitudes",
"Sociality"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\n1. Introduction: Do people make purely rational decisions guided by their economic utility? Or is human \"sociality\" a factor in decision-making? Do people enjoy making decisions? Do people make decisions with other people using different processes of decision-making that result in different outcomes than when they make these decisions individually? If there is a difference in perceived value between decision-making that involves similar social environments, then could there be an economic value that can be placed on such an environment and if so, how does it affect decision-making and, especially, the decision-making under risk and uncertainty.\n2. Theory: 2.1 Predictions of the EUT\n3. Methods: In this section, we argue that for certain conditions neither EUT, nor prospect theory works and a new theory, the theory of \"sociality\", described in the previous section, should be implemented. These necessary conditions are the basis for the design of the experiments and include:\n4. Data: Original data were collected with the help of z-Tree (Zurich Toolbox for Readymade Economic Experiments) software package. The same program code was used for experiments in the USA, Russia and New Zealand. The data reflect observations on each stage of the game, including PD game over time. Observations reflect the choices of different sets of 12 participants over 18 experiments. Each experiment contained up to 20 periods. The participants were not informed of the total number of experimental rounds.\n5. Results: The results are presented in the order of the hypotheses. Based on the analysis, we do not reject any of the hypotheses. There is clearly an economic value derived from being around other people. How does this economic value vary across domains and countries?\n6. 
Discussion: Another goal of our research is to show that the theory of \"sociality\" can provide a number of insights into the problems of social sciences and business. Our theory relaxes the assumption of rationality and, as a result, leads to better predictions about decisions in the social domain without giving up mathematical rigor. Below we focus on some of the applications for social science and business and discuss their relevance for the broader social sciences literature.\n7. Conclusion: This paper has developed a framework for studying the individual decisions of participants that are made in the social context under the assumption of low stakes. In general, the results of the experiments were consistent with predictions derived from the theory of \"sociality\" and the economic model of general utility proposed. First, the new risk attitudes were confirmed. Contrary to what EUT and prospect theory predict, we found risk tolerance in the presence of \"sociality\" regardless of whether the experiment was framed as losses or as gains. Second, the value of \"sociality\" was estimated. This result confirms that \"sociality\" has a definite value for some countries and a definite minimum value for others when stakes are low. This finding also suggests that the utility function is misspecified and that there is a need to define and formalize social utility. Moreover, when stakes are low \"sociality\" is able to compensate for monetary loss. In particular, the participants equilibrate their twofold profit to \"sociality\". Therefore, in the social context, when stakes are low, human beings value the social interaction more than the monetary outcomes (losses) of such interaction.\n",
"role": "user"
},
{
"content": "Purpose: This paper aims to study the value of sociality. Recent experimental evidence has brought to light that the assumptions of the Prospect Theory by Kahneman and Tversky do not hold in the proposed substantive domain of \"sociality\". In particular, the desire to be a part of the social environment, i.e. the environment where individuals make decisions among their peers, is not contingent on the framing. Evolutionary psychologists suggest that humans are \"social animals\" for adaptive reasons. However, entering a social relationship is inherently risky. Therefore, it is extremely important to know how much people value \"sociality\", when the social outcomes are valued more than material outcomes and what kinds of adaptations people use.\nDesign/methodology/approach: We develop a new theory and propose the general utility function that features \"sociality\" component. We test the theory in the laboratory experiments carried out in several countries.\nFindings: Our results suggest that when stakes are low the theory of \"sociality\" is successful in predicting individual decisions: on average, people do value \"sociality\" and it surpasses the monetary loss.\nOriginality/value: The main contribution of this paper is the breakdown of the risk attitudes under low stakes and individual level of decision-making. Another advancement is the ability to formalize the social utility or the theory of \"sociality\" in an economic model; we use general utility function that we define both on the outcomes and on the process of the decision-making itself and test in laboratory studies.\n",
"role": "assistant"
}
] |
Entrepreneurial decision-making: new conceptual perspectives
|
[
"Entrepreneurial mindset",
"Decision making",
"Socially situated cognition",
"Entrepreneurial cognition",
"Recognition-primed decision-making",
"Naturalistic decision-making"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\nIntroduction: The introduction of the concept of decision making drastically altered the ways in which managers thought about their actions (Buchanan and Connell, 2006; Pettigrew, 2014). The term \"decision\" was coined to indicate that the end of deliberation has been achieved and that the implementation of action has begun (Buchanan and Connell, 2006). As a result of complex and changing business environments, most of the decisions made in the real world are made in situations in which the decision maker cannot possibly know all the facts and the consequences (Zimmermann, 2011). Given that research is focusing on how accurate decisions can be made in uncertain conditions, it is no surprise that researchers in the entrepreneurial field are interested (Shepherd and Rudd, 2014).\nDecision making: Two main models are examined in relation to the concept of decision making: normative models which focus on the consequences of choosing between alternatives and prescriptive models which focus on the determinants of those choices (Beach and Lipshitz, 2017). The most basic process of decision making follows four steps: recognition of the present state, evaluation of the immediate options, selection of an action based on one's needs and re-evaluating the option based on the outcome (Beach and Lipshitz, 2017; Doya, 2008). While Doya (2008) suggests that this normative process may not always be explicitly followed, it is useful to understand the decision process in its simplest form.\nRationality and bounded rationality: Theory of rational choice\nCognitive biases and emotion: Simon et al. (2000, p. 113) define cognitive biases as \"common types of mental shortcuts used to make judgements\". These help individuals overcome their cognitive limitations but result in less rational and less comprehensive decision making. 
Emotions are what lead individuals to develop and rely on cognitive biases because different emotions elicit different cognitions. As a process of evolution individuals' information processing capacity has increased, which has led to decision makers attempting to make more rational choices when faced with a decision; however, even though there is an attempt at rationality, emotion and cognition are very influential in decision making (Lerner et al. 2015; Pfister and Bohm, 1992).\nSocial cognition theory: Social Cognitive Theory (SCT) came about when a paradigm shift changed the focus from trying to understand human behaviour, to understanding the cognitions that drive human behaviour (Schwazer and Luszczynska, 2005; Wyer and Shrull, 2014). The theory holds that humans feel a higher sense of commitment towards a decision if they perceive that they are in control of the situation (Schwazer and Luszczynska, 2005). Until this paradigm shift, human behaviour was thought of as being under conscious control. However, SCT recognised that a large portion of human cognition operates implicitly (Greenwald and Banaji, 1995; Wyer and Shrull, 2014). The main feature of implicit cognition is that past experience influences judgement in conscious situations (Greenwald and Banaji, 1995). This is in line with the theory of NDM which states that decision makers draw on past experiences and judgements to make sense of and to find a plausible course of action in current situations (Klein, 1993).\nNaturalistic decision making and recognition-primed decision making: NDM was introduced as a means by which decision makers who are experts in their respective fields could make decisions given various parameters based on a more intuitive approach (Lipshitz et al., 2001; Zsambok, 2014). 
While it is noted that NDM favours the decision making of expert entrepreneurs who can match their cognitive capabilities to the nature of the decision, it does not exclude novice entrepreneurs, since they will learn this skill through the progression of their careers (Gustafsson, 2006). Lipshitz and Strauss (1997), as well as Klein (2015), established that most decision scenarios include some form of uncertainty and risk, which is said to disincentivise decision makers from actually making the decision. In a real-world scenario this may not always be possible, hence NDM takes these factors into consideration and separates the concepts of uncertainty and risk.\nEntrepreneurial cognition and socially situated cognition: EC refers to \"the knowledge structures that people use to make assessments, judgements or decisions involving opportunity evaluation, venture creation, and growth\" (Randolph-Seng et al., 2014, p. 1). EC concurs with the premise that decisions are made in bounded rationality given that individuals' knowledge structures comprise their cognitive structures which are developed through emotion. Early researchers theorised that mental processing could influence the way reality was perceived, which, in turn, could result in changed behaviours and actions towards how decisions were processed. However, the first real theories came from authors such as Mead (1934) whose social behaviourism theory was in line with the constructivist paradigm suggesting that our perceptions influence our behaviour but that the way we perceive the world is automatic and non-conscious (Mitchell et al., 2007). This gives context to the notion that when making decisions individuals can be affected by biases that they are unaware of, leading to flawed decision making. 
Later authors adopted a more behaviourist approach to understanding cognition and started to separate conscious and non-conscious influences of behaviour, stating that we are unable to dictate or control the non-conscious elements of our environment but that if mediating factors are available these can be utilised to shape our perceptions (Mitchell et al., 2014). Furthermore, the work of Mitchell et al. (2014) agrees with the premise put forward by Klein (2008) in that the more expertise entrepreneurs are able to generate, the more entrepreneurial intuition they develop which aids in the formation of perceptions and ultimately guides the decision-making process.\nThe entrepreneurial mindset: The EM developed from the theory of EC and is described as the ability to sense, act and mobilise in uncertain conditions (Haynie et al., 2010). The main premise of EM is the ability of entrepreneurs to engage in multiple cognitive strategies (the use of acquired knowledge) to choose the best one, given their environment, goals and motivation. This is as a result of the dynamic and uncertain environment that entrepreneurs operate within, agreeing with the theory of SSC put forward by Smith and Semin (2004).\nSynergistic view of theoretical perspectives: While much research exists on models and the process of decision making in order to appropriately accommodate the complex context that entrepreneurs with limited resources operate in, diverse theoretical lenses should be overlaid to augment conceptual frameworks.\nConceptual framework: Whetten (1989) proposes that in order for a conceptual paper to be considered valid there are seven criteria that must be met, namely: What's new? So what? Why so? Well done? Done well? Why now? Who cares? The main criterion is \"What's new?\". 
This paper satisfies these criteria by providing a new framework that examines the physical and mental aspects of entrepreneurial decision making, offering a new means of examining the field of entrepreneurial decision making using the recommendations put forward by Randolph-Seng et al. (2015) of finding new associations for the situated element of EC. Furthermore, given the dynamic and ever-changing environment that entrepreneurs find themselves in, it is important to bring this framework into the field of entrepreneurial research, which has been of particular interest to academics in the last decade. Weick (1989) suggested that a conceptual paper should be related to testable hypotheses rather than testing the hypotheses, as the premise of conceptual papers is to offer a bridge between a valid theoretical construct and a useful one.\nScholarly implications and future research: First, the goal of this paper is to enrich the understanding of entrepreneurial decision making. Despite literature dating back as far as the early 1930s (Mead, 1934), there is very limited literature available on the relationship between the decision-making processes and metacognition that forms the foundation of entrepreneurial decision making. Figure 4 presents a theoretical framework on the overlapping relationship between RPD and the EM; this paper challenges future research to empirically investigate the association between these two theories to enhance the understanding of how entrepreneurs make decisions, specifically due to the high level of uncertainty. 
Future research should empirically examine and investigate whether these two broad streams of theories could be combined into a holistic theory for decision making amongst entrepreneurs as well as to identify whether the element of taught metacognition can be brought into the decision-making process.\nConclusion: The aim of this paper is to provide a more holistic understanding of entrepreneurial decision making given the theories of EC and RPD as well as to provide a springboard for further studies on the overlap between decision-making processes and the EC that forms part of decision making.\n",
"role": "user"
},
{
"content": "Purpose: The purpose of this paper is to call for inclusion of alternative but complementary conceptual perspectives in entrepreneurial decision-making theory and practice.\nDesign/methodology/approach: A conceptual approach, drawing on two sets of theoretical perspectives relating to decision making of entrepreneurs, is adopted.\nFindings: The paper presents a conceptual framework of entrepreneurial decision making utilising the intersection between a metacognitive model of the entrepreneurial mindset and the recognition-primed decision-making theory. The paper argues that the convergence of these theoretical viewpoints provides a selection of decision-making processes for entrepreneurs in an uncertain business environment.\nPractical implications: Decision-making models and tools are available to entrepreneurs; however, the relevance and applicability are restrained by the complexity and uncertainty of business environments in which entrepreneurs operate. New and more inclusive conceptual perspectives are required to improve the accuracy of decision making.\nOriginality/value: The study offers a framework that integrates two diverse theoretical dimensions of entrepreneurial decision making. The findings of this study provide direction for practice and for future research on entrepreneurial decision making. The paper intends to encourage researchers to support a new combined theoretical approach and to help practitioners better understand the reasons for entrepreneurial decision failure.\n",
"role": "assistant"
}
] |
A segmentation of adolescent online users and shoppers
|
[
"Adolescents",
"Motivations",
"Internet shopping",
"Segmentation",
"Cluster analysis",
"Shopping enjoyment",
"Internet",
"Shopping"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\n__NO_TITLE__: An executive summary for managers and executive readers can be found at the end of this article.\nIntroduction: With a generation of youth now afforded access to the internet in their daily lives whether that be at home, school, the public library, from family or friends, through mobile technologies, or more, marketing researchers are primed to better understand why adolescents go online, and more specifically, how adolescents differ in their motives to explore the medium. An intriguing aspect here is that while adolescents are immersed in this digital revolution, they are, at the same time, in a formative part of their lives. The need to be accepted by friends and simply determining \"who they are\" weighs heavily on their minds (Roseth et al., 2008). Indeed, the internet is very important to young people today - as important as television was to previous generations. In 2011, teenagers in the US alone were projected to spend almost $209 billion online (Foshee, 2010).\nConceptual framework: motivations: Online usage motives\nDescriptor variables: In addition to motivations, the qualitative phase of this research identified key descriptor variables that may be helpful in describing our adolescent online usage/shopping segments. These variables include facilitator variables (online accessibility and online parental attitudes), online shopping involvement and general online usage, online consumer self-efficacy (technical and shopping), family/self-purchase influence variables, and several shopping behaviors. 
A discussion of each type of descriptor variable follows, which includes the profiling of segments on several demographics (age, race, and gender).\nResearch methodology: This research began with a qualitative phase (Strauss and Corbin, 1998), involving 23 in-depth interviews (13 adolescents; 10 parents), 15 of which were conducted in person at the first author\nFindings: Assessment of motivations and descriptors\nDiscussion and conclusions: The findings from this study, developed from qualitative and quantitative research, identify both shopping and non-shopping online motivations for an adolescent sample in a small town in southeast US, and effectively integrate these motivations to determine the types of adolescent online users and shoppers. Thus, a unique aspect of this research entails the integration of non-shopping and shopping motivations, which is arguably more representative of the typical adolescent Internet experience. Also, this paper examines a research topic that has been given almost no attention in the services literature (i.e. online adolescent shopping), yet is clearly important. Moreover, this study reminds e-service researchers of a cognitively maturing group that is restricted to some degree, by reduced mobility (i.e. they cannot drive), from traditional shopping settings. Further, considering that this sample consisted of non-urban adolescent respondents, the desire to shop online may be even stronger since traditional shopping settings are less prevalent. As such, the online medium has a captive audience, and clearly provides teenagers a higher quality service and shopping experience, allowing autonomy in experience as well.\nMarketing implications: The relevance of the present study arises from the fact that adolescent consumers are important end-customers in their own right. Our results can help e-tailers by providing them with information to allow them to improve and differentially target their e-service quality and messages. 
For example, adolescents in cluster three, the largest cluster, are more eager than the other two clusters to buy things online that are \"different and cool.\" Thus, targeting them with messages relative to the innovativeness of the product might be fruitful.\nLimitations and future research: Given self-report data by young respondents, concerns include question misinterpretation, inaccurate recall, and insufficient survey attention. Additionally, we drew the survey sample from a public school system in a small town in the southeast US, where public transportation does not afford young people a personal means of mall shopping. For this reason, this study\n",
"role": "user"
},
{
"content": "Purpose: The purpose of this study is to identify key motivations for adolescents using and shopping on the internet, and to segment the sample based on these motivations.\nDesign/methodology/approach: This research occurs in two phases: a qualitative phase involving interviews with adolescents aged 12-15 and parents of adolescents in this age group; then, a quantitative phase using a survey (n=360) to measure the motivations and other key profiling variables identified in the qualitative phase and the literature review.\nFindings: The research identifies five basic motivations and two shopping motivations influencing adolescent online usage and shopping. Next, a cluster analysis is conducted using the motivations developed and is used to identify segments of adolescent internet users and shoppers, which are subsequently described.\nResearch limitations/implications: The sample of adolescents was taken from a town in the southeast USA. Caution should be taken when generalizing to adolescents outside this region.\nPractical implications: This research identifies for internet marketers the different types of adolescent internet users and shoppers. It also recognizes key motivations that marketers should consider when targeting adolescents.\nOriginality/value: This research is the first to identify important segments of adolescents based on their motivations for online usage and shopping, and builds on a limited stream of research relative to adolescents and internet shopping.\n",
"role": "assistant"
}
] |
The changer and the changed: Evolving theories and practices of change in ADVANCE calls for institutional transformation
|
[
"Organizational learning",
"Intersectionality",
"Gender equity",
"Gendered organizations",
"Theory of change",
"Institutional transformation"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\n__NO_TITLE__: The US National Science Foundation's (NSF) ADVANCE program was announced in 2001 and is still active. This is long-lived for any grants program at NSF, but perhaps especially for a cross-disciplinary program focused on \"broadening participation\" of diverse people and institutions - at that time, a newly explicit goal at NSF (NSF, 1999). Supported by contributions from directorates in the science, technology, engineering and mathematics (STEM) disciplines, the existence of ADVANCE was no small political feat to orchestrate (Rosser, 2017). Its longevity, thus, speaks to NSF's commitment to foster \"the most meritorious ideas\" and nurture \"talent wherever it may be found\" (NSF, 1999, 2008). Within NSF, ADVANCE is also pioneering in its institutional approach to change, seeking to transform institutional structures and cultures to improve education and optimize the use of resources through collaboration and networking. Thus, it offers an institutional change model for other education and diversity initiatives at NSF (James and Singer, 2016; Cordova, 2016).\nConceptual frameworks: To understand the content of the ADVANCE RfPs, we use frameworks that consider inequalities and inequities as outcomes of institutional arrangements - policies, procedures and practices. Acker's (1990, 1998) notion of \"gendered organizations\" treats academic work environments as imbued with hierarchies of inequality that disproportionately and implicitly impact women. For women, academic career pathways in STEM are fraught with obstacles that arise from the structures of graduate school, postdoctoral work and faculty careers (De Welde and Laursen, 2011). These structures are \"[...] guided by hegemonic masculine ideals that inform decisions about employee (faculty) pay and promotion [...] 
[as well as ways] that women and men academics are segregated to varying degrees into different departments and jobs [...]\" (Bird, 2011, p. 204).\nFindings from analysis of awards data: history of ADVANCE program activity: To provide context to our text analysis of the RfPs, we summarize the history of ADVANCE programming as constructed from NSF's awards database. For conciseness, we summarize these data in Figures 1-3 and Table II, using the following acronyms: IT for Institutional Transformation; PAID for Partnerships for Adaptation, Implementation and Dissemination; PLAN for Partnerships for Learning and Adaptation Networks; ARC for ADVANCE Research and Coordination Network; ALE for ADVANCE Longitudinal Evaluation. Narrative description of these program tracks is provided in the following section.\nFindings from analysis of ADVANCE RfPs: an evolving theory of change: Based on the qualitative analysis of the RfP texts, we describe patterns of consistency and change of two broad types:\nDiscussion: In this section, we consider the broad trends that are evident in the cumulative set of changes observed in the ADVANCE RfPs over time. Some of these are supported by multiple lines of evidence. We also speculate on the processes that led to these changes over time.\n",
"role": "user"
},
{
"content": "Purpose: The purpose of this paper is to examine the evolving theories of change of the US National Science Foundation's (NSF) ADVANCE program to increase the representation of women on academic faculties in science, technology, engineering and mathematics (STEM). ADVANCE supports efforts to transform the cultures and structures of US institutions of higher education by removing gendered barriers to STEM faculty women's employment, advancement and success, and by developing change strategies that others may adopt.\nDesign/methodology/approach: The empirical study is based on qualitative, longitudinal analysis of nine requests for proposals (RfPs) for the ADVANCE program (2001-2016), complemented by historical analysis of funded ADVANCE projects using public records.\nFindings: The analysis identifies changes over time that suggest shifts in NSF's rationale and theory of change for ADVANCE. Increased guidance directs how institutions should best undertake change, document outcomes and share best practices. The RfPs reveal growing attention to equity, rather than simply to representation, and to intersectionality - how gender, race, social class and other identities intersect to produce disparate experiences and outcomes for individuals differently positioned in social systems. Gendered organizations theory helps to place these experiences and outcomes in a structural context. Iterative processes of organizational learning are postulated to account for these changes over time.\nOriginality/value: While many studies have examined ADVANCE projects' activities and outcomes, none have examined the premises and design of the ADVANCE program itself. This analysis offers insight into how the ADVANCE RfP has driven innovation and learning about transformative institutional change to advance gender equity in STEM.\n",
"role": "assistant"
}
] |
Teacher learning communities: the landscape of subject leadership
|
[
"School-based curriculum renewals",
"Subject leadership",
"Teacher learning communities"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\n1. Introduction: Educational reform initiatives, both local and worldwide, have one mission to accomplish. They all aim to enhance school effectiveness. For the success of these initiatives, teachers play a pivotal role (Fullan, 1995; Lieberman and Miller, 1999; Hopkins, 2000). The traditional, transmission-oriented approach of helping teachers to reconceptualize their pedagogy through off-site professional development (PD) courses and seminars that take place in isolated moments in time is no longer perceived as viable and effective (Darling-Hammond, 1998; Lieberman, 1996). The \"deficit\" model which views teachers as empty vessels to be filled (Garmston, 1991) is not considered appropriate either. Instead, collaborative and practice-oriented renewal projects undertaken by teacher learning communities (TLCs) in schools that significantly involve teachers at all levels of planning and actioning are seen as effective in helping teachers to conceive and attempt alternative pedagogies, and to develop in their dual roles as learners and practitioners (Darling-Hammond and McLaughlin, 1995). This paper examines the ways in which TLCs were organized by subject leaders in secondary schools in Hong Kong for their potential in advancing teacher professional enhancement.\n2. PD and TLCs: PD has been defined in many ways. It can be seen as more than mere skill training, but as a process of culture building (Cochran-Smith and Lytle, 2001). It is most effective when based in schools and related to the daily activities of teachers and learners (McLaughlin and Zarrow, 2001). Based on this perspective, schools are transformed into communities of learners and inquiry (McLaughlin and Zarrow, 2001), whereby teacher professional learning becomes a collaborative process. 
Numerous studies show a noticeable impact of successful PD experiences on teachers' work (Darling-Hammond, 1999; Nelson, 1999; Wood and Bennett, 2000), student achievement and learning (Borko and Putnam, 1995; Darling-Hammond, 1999) and educational reforms (Chadbourne, 1995; Walker and Cheong, 1996; Day, 2000; Morris et al., 2000; Van Driel et al., 2001; Villegas-Reimers, 2003). Conditions identified as facilitating teacher PD include opportunities for instructional expertise enhancement, leadership development in innovation and inquiry, respect for teacher autonomy, promotion of openness, collegiality and trust, flexible school infrastructure and supportive administration and resources (Ancess, 2001; Guskey, 1995).\n3. Subject leadership: functions and styles: Subject-based departments are widely perceived as the primary organizers of the professional lives of secondary school teachers, and discipline identities and subject subcultures of high school teaching are critical contexts for defining shared work and building communities of practice (McLaughlin and Talbert, 2001). There is a growing body of literature describing the subjects as distinct and important contexts for teachers in the advancement of their professional careers (Siskin and Little, 1995). Subject leaders fulfil both leading professional and executive role functions (Hughes, 1985). In terms of professional roles, subject leaders provide professional guidance on curriculum planning, pioneering innovative pedagogy, organizing PD opportunities, sharing good practice and serving as role models of high standards of teaching. Externally they serve as spokespersons on professional matters for the subject departments in meetings with the schools' senior management team (SMT) and parents. In terms of their executive roles, they make decisions on teaching assignments and resource allocation, monitor standards and delegate responsibilities and duties to members of their departments. 
While the creative and inspirational dimension of the subject leader's role (Bennis, 1989) grants teachers sufficient autonomy to innovate, flourish and to generate change and development, the management and executive dimension emphasizes control, order and consistency (Lahtero and Risku, 2012). Subject leaders concerned with growth and development concentrate on the process of change, respect innovativeness and consensus building (Lahtero and Risku, 2012) while those concerned with administration and management tend to be preoccupied by setting planning goals, organizing structures and managing operations and controlling performances. In general, leadership styles are classified as relatively open or closed, top-down and \"leader-centric\" (Bolden et al., 2009) or \"distributed\", \"democratic and participative\" (Leithwood et al., 2004), transactional or transformative (Blase and Anderson, 1995). Along the lines summarized above, the ministry of education in Hong Kong where the current study was situated expects subject leaders in Hong Kong schools to: first, serve as a \"bridge\" between the school administrative personnel and teachers in subject departments; second, encourage a culture of open discussion and mutual support for teacher PD through peer observation, team teaching and reflective dialogues; and third, lead the departments in the process of developing school-based curriculum taking into account learners' interests, needs and abilities (Curriculum Development Council, 2004). The study reported in this paper aimed to examine the ways in which subject leaders conceptualized and organized the learning communities that were formed within their departments for pioneering innovative strategies for enhancing language learning and teaching.\n4. 
The study: Since the 1990s, Hong Kong schools have been devoting immense efforts to improve student learning outcomes and instructional practices through school-based curriculum renewal projects undertaken by TLCs formed within subject departments in Hong Kong schools. Such initiatives were in large part funded by the Education Bureau which, for instance, set aside more than HK$1.4 billion for funding school-based English enhancement schemes undertaken by secondary schools in Hong Kong. While research studies conducted in the Western world show that reflective and collaborative inquiries within professional communities have helped to improve classroom practices and teacher development (e.g. Hord, 1997), little is known about the management practices of subject leaders in shaping the internal cultural conditions within their departments (Brown et al., 2000). In order to understand the ways in which TLCs were organized and managed in secondary schools in Hong Kong, a qualitative cross-site case study was conducted.\n5. Findings: Based on the descriptions of the teachers and subject leaders, Table I has been drawn up to show the roles that the teachers reported they had performed in the projects.\n6. Discussions: The leadership styles adopted in the three teams reflect the dichotomy in the level of importance that subject leaders attached to their concern for maintaining harmonious relationships with subordinates, promoting professional empowerment and guaranteeing uniformity and accountability. While the PEW subject leader clearly prioritized the goal of maintaining a congenial atmosphere over the concern for task achievement at one end of the continuum of leadership approaches, the LoA subject leader highlighted the importance of quality control and accountability for organizational goal achievement on the other end of the spectrum. 
The former displays a high concern for the comfort of the members of his department at the expense of accountability and teachers' constructive involvement in their own professional renewals, while the latter clearly demonstrated a techno-instrumentalist view on educational management and a lower concern for individual teachers' satisfaction and sense of ownership of the innovative pedagogy. The lack of teacher involvement in designing key tasks in both the PEW and LoA projects weakened teacher ownership of the initiatives and their sustainability, as well as teacher capacity for professional renewals and self-regeneration.\n7. Conclusions: This paper presents the leadership practices within three TLCs. While one of the subject leaders (i.e. LoA SL) operated as a \"lone chef\" (Harris, 2004), displaying a high concern for accomplishment of tasks through control and prescription, another subject leader (i.e. PEW SL) demonstrated an exclusive concern for harmony and relationships among his team members, thus undermining the enforcement of equitable and quality member contribution for change and renewals. A well-balanced focus on concerns for task and for growth and professional empowerment was embedded in the leadership practices of the subject leader of TBI Team, who ensured participative and interactive learning and leadership evolvement. While the subject leader of LoA Team may not be able to forge a strong sense of ownership and a culture of creative innovations, by far he succeeded in generating standardized practices which were seen as important guidelines for novice members of the LoA, though the sustainability of the project efforts was questionable. Conditions conducive to collective, sustainable, self-generating and energized reform endeavours thrived under the leadership of TBI. The collaborative practice that evolved in the TBI project was a collective process that was both engaging and enriching. 
It also reflects the members' increased understanding and knowledge representations about the task-based principles that guided their innovative practices. Their collective success demonstrates a balance between the production of deliverables such as instructional plans and intangible outcomes embodied in deep learning experiences for the community members. Integrating learning communities into the work lives of school teachers helps re-culture the teaching profession, by \"changing the ethos of teaching from individualism to collaboration, from conservatism to innovation\" (McLaughlin and Talbert, 2001, p. 125). There were explicit efforts made by the subject leader of the TBI Team to formalize common structure and conditions that facilitate collaborative inquiries and reflective peer reviews and dialogues among teachers about their practices, which enhance their professional autonomy and personal investment in renewing and sustaining those practices. To enable TLCs to grow into collaborative learning and knowledge sharing communities, there should be adequate infrastructure for team learning (Darling-Hammond, 1998; Hargreaves, 1997); opportunities for members to play new roles, e.g. in curriculum leadership, and to create and share stories of individual and community successes. The following key elements identified by Lave and Wenger (1991) and Wenger (1998) best summarize the features that define productive TLCs, which include mutual engagement, joint enterprises and shared repertoires of practice. The cases reported in this paper illustrate the landscape of leadership practices of subject teachers operating in the Hong Kong context. It is recommended that subject leadership training should encompass reflective discourse on epistemological beliefs and assumptions on teacher learning and development, and leadership orientations that shape collaborative professional renewal practices. 
Further research should also be conducted to examine the contextual variables, and factors influencing the choice of leadership approaches within PLCs, and their impacts on school improvement.\n",
"role": "user"
},
{
"content": "Purpose: Teacher learning communities (TLCs) formed within subject departments are conceptualized as reform platform for facilitating school improvement and teacher development. The purpose of this paper is to examine the ways in which three TLCs were structured and managed for generating change capability in secondary schools in Hong Kong.\nDesign/methodology/approach: A review of the literature on the contribution of learning communities to teacher professional development, and the functions and styles of subject leadership provides the theoretical background of the study. Participant observations and semi-structured interviews were employed as the main instruments for the investigation into the potentials of TLCs for enhancing professional renewals. Narrative accounts were obtained from a total of 21 teachers (including three subject leaders) on the nature and process of their participation in the renewal projects.\nFindings: The findings of the interview study reveal the landscape of subject leadership orientations that shaped the collaborative practices, power relations and knowledge representations within TLCs. Variability is noted in the subject leaders' concerns for harmonious relationship, executive control, professional growth and leadership empowerment. The spectrum of knowledge structure in these communities is marked by a hierarchical relationship on one end, and a heterarchical relationship on the other.\nResearch limitations/implications: This paper portrays the dynamics within three TLCs, and depicts a continuum of three leadership styles. 
The literature on subject leadership in TLCs could be enriched with further investigations into the contextual variables and sources of influence that affect the choice of leadership orientations of subject leaders who grapple with a balance between the managerial imperative for control and accountability and a learning imperative for participative explorations that satisfy teachers' aspirations for autonomy and creative investments in their own professional enhancement.\nPractical implications: Subject leadership training should encompass reflective discourse on epistemological beliefs and assumptions on teacher learning and development, and leadership orientations that shape collaborative professional renewal practices.\nOriginality/value: While research studies conducted in the Western world show that reflective and collaborative inquiries within professional communities have helped to improve classroom practices and teacher development (e.g. Hord, 1997), little is known about the management practices of subject leaders that shape the internal cultural conditions, power relations and knowledge representation within their departments in the context of Hong Kong.\n",
"role": "assistant"
}
] |
Technology readiness as moderator for construction company performance
|
[
"Information system quality",
"Technology readiness",
"Information systems",
"Organizational performance",
"Construction industry",
"Taiwan"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\n1 Introduction: With the small area and dense population of Taiwan, the property market has been competitive for several years. New products are continuously replacing the old. Property products possess more than just the single value of a \"living space\" (Xu et al., 2010). Under the future market segmentation trend of Taiwan property and the challenge of customized consumer requirements, the construction industry has become one of the most competitive markets (Lowe, 2011). The brand management and product planning of the construction industry are key factors of the land value. Providing products with sustainability against inflation is an imperative measure (Warren-Myers, 2012) to meet future challenges. In addition, the rapid development of information and internet technologies has reduced the lifecycle of products (Ghobakhloo et al., 2011). Therefore, the construction industry must focus on developing core businesses and technologies and expanding the target market to increase the performance and efficiency of businesses. However, effective and rapid product design activities require a divergent, inter-functional, and cooperative product development process (El-Saboni et al., 2009; Rezgui et al., 2010), which is likely to result in data transmission complexity in the product design process. To resolve the increasing complexity during product development, the construction industry must determine the application of information technology (IT) to the main product development process (Assaf et al., 2010; Gurumurthy and Kodali, 2012). Nevertheless, Alkhattabi et al. (2011) contended that although advanced technology may enable employees to acquire information rapidly and simply, the information system quality (ISQ) cannot be guaranteed (Lu et al., 2011). ISQ and services are the primary indices for evaluating quality, such as technology specifications and functions (Lam et al., 2010). 
Improving quality improves competitiveness. Previous studies have indicated that the construction industry introduced information and communication technology to strengthen the effective spread of internal knowledge and improve the operating performance (Love et al., 2011). However, despite the significant investment in the information infrastructure to improve knowledge transfer within the organization, certain enterprises did not acquire their deserved performance results (Lin and Tseng, 2005). Stoel and Muhanna (2011) contended that investing only in IT improvement cannot guarantee the inter-employee knowledge transfer or improve organizational performance (OP). In other words, successful IT introduction benefits knowledge sharing. Nevertheless, whether the enterprise operating performance can be improved depends on the quality of coordination and integration of organization members, science and technology, and intra-enterprise technologies (Stojanovic et al., 2010). Simultaneously, the interaction and experience of employees among various information systems influences users' technology readiness (TR) (Sun and Poole, 2010). If enterprises possess higher TR when introducing new technologies, effective communication and sharing among different organizations and functional departments can be achieved. To improve customer service quality and satisfy customized requirements, those in the construction business generally use computer software (Chong et al., 2011b) and new technologies to assist design and execution and promote the completion of the organization's operating goal. However, whether organization members possess the required technology skills and knowledge for the workplace and can proficiently use the IT system facilities of the organization has a certain degree of influence on the new product design and development process efficiency. 
Thus, this study aims to understand whether the extrinsic variable (the organization ISQ) and the intrinsic variable (personal TR of employees) are related to OP improvement for study respondents. Moreover, while past studies (Haug et al., 2011; Koh et al., 2008; Lin and Hsieh, 2007; Maples et al., 2009; Walczuch et al., 2007) have largely focused on the user's TR in a technology-supported environment as an antecedent factor of successful adoption of technology innovation, this study, in particular, explores whether the diversified TR of employees of the construction industry has moderating effects on the relationship between ISQ and OP based on the goals of high performance and service quality for the construction industry in Taiwan. The results are expected to shed light on how low TR and high TR of employees may influence OP in a technology-supported work environment, which can be further used as a reference for related construction enterprise operators and management to conduct subsequent organization development and reform.\n2 Literature review: 2.1 ISQ\n3 Methods: This is a quantitative study using a survey questionnaire to collect data. The research structure of this study is shown in Figure 1. The following illustrates the measures and the sample of the present study.\n3.1 Measures\n4 Data analysis result: 4.1 Reliability and validity analysis and correlation analysis\n5 Conclusion and suggestions: 5.1 Result discussion and management implications\n",
"role": "user"
},
{
"content": "Purpose: - The purpose of this paper is to understand the effect of information system quality (ISQ) and technology readiness (TR) on organizational performance (OP), from the background that the construction industry pursues higher performance and service quality.\nDesign/methodology/approach: - A survey questionnaire was used. The hierarchical regression analysis was employed to analyze the 545 valid questionnaires collected (valid return rate: 54.5 percent) from construction industry employees in Taiwan.\nFindings: - The results indicate that ISQ directly improves OP. Particularly, TR has a moderating effect on the relationship between ISQ and OP. TR can enhance the positive influence ISQ has on OP.\nResearch limitations/implications: - Data sources for this study were restricted by the respondents' cognition and experience regarding ISQ and OP. Biases may exist among respondents. In addition, the collected data were related to the Taiwan construction industry, which may differ from operating environments and characteristics of other industries and cities. The management implications are explained in the Conclusion.\nPractical implications: - This study provides meaningful information for managers of the construction industry. Management can use TR to influence the relationship between ISQ and OP and to further improve OP to meet future challenges.\nOriginality/value: - Under the trend of market segmentation and the challenge of customized consumer requirements, brand management and product planning of the construction industry have become key factors of land value. The results of this study indicate that organization ISQ improves OP and personal TR is the moderator that influences the relationship between ISQ and OP to improve OP.\n",
"role": "assistant"
}
] |
Readability, governance and performance: a test of the obfuscation hypothesis in Qatari listed firms
|
[
"Agency theory",
"Corporate performance",
"Legitimacy theory",
"Readability"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\n1. Introduction: With the increase of required information in annual reports related to a firm's social responsibility, risk management and sustainability, accounting narratives are becoming increasingly important in external reporting (Clatworthy and Jones, 2001). Narrative disclosures (NDs) have grown in importance over the past few years (Jones and Smith, 2014). They incorporate rich content describing, discussing and evaluating the firm's financial and non-financial performance and its competitive position. NDs provide management with opportunities to explain and reveal some of the underlying reasons behind the firm's performance, productivity, comparative advantages and competitiveness. In fact, NDs set the context for the annual reports and therefore must be effectively communicated to the firm's stakeholders. Smith and Taffler (1992b) argue that NDs must be an effective medium of communication so that users of annual reports correctly and precisely understand the intended message sent by the preparers of these reports. Therefore, effective communication is an essential element if NDs are to be understood.\n2. Literature review: An important aspect of annual report disclosures, including narratives, is the need for an effective readable communication in which there is a shared meaning between the producers and users of the information (Hrasky and Smith, 2008; Jones and Smith, 2014). Readability is an important dimension of the quality of communication between the authors of financial reporting and the readers (Courtis, 2004; Linsley and Lawrence, 2007; Rennekamp, 2012; Tan et al., 2014). Jones and Smith (2014) argue that readability \"measures the textual difficulty of a passage\".\n3. Theoretical framework: There are different theories explaining corporate managers' motives behind the reading difficulty of annual reports and obfuscation. 
For example, the Agency Theory and the Signaling Theory explain the corporation's disclosure strategy in terms of the economic consequences on corporation performance (Watts and Zimmerman, 1978, 1990). In this regard, Jensen and Meckling (1976) argue that conflicts of interest always exist between the owners (principals) and their managers (agents), which results in agency costs such as those of equity and debt. To mitigate this problem, managers produce more non-financial information, i.e. NDs, and make such disclosures easy to understand for different stakeholders including the firm's owners and debtors. The ease of reading, i.e. readability, of annual reports is required as it complements the firm's financial information with more details for decision-making purposes. Accordingly, stakeholders will be able to better understand the annual reports and the impact of managers' decisions on the firm's operations.\n4. Qatar \"corporate governance\" institutional context: Qatar is one of the six countries constituting the Gulf Cooperation Council (GCC) countries (i.e. Bahrain, Oman, Qatar, Saudi Arabia, the UAE and Kuwait). The growth of this region has been predominantly from revenues generated through oil and gas reserves. Yet, over the past decade, the region has moved towards diversifying its sources of revenue from the construction, retail, manufacturing and tourism sectors (Dalwai et al., 2015). Although Gulf region countries share similar characteristics in terms of culture, customs, religion and language, they are not homogeneous in terms of their levels of corporate governance and institutional development (Baydoun et al., 2013; Shehata, 2015; Hassan, 2015). Each country has a unique mixture of intra-country specific legal, economic, social and political institutions (Hassan, 2008, 2014, 2015) and Qatar is no exception. 
Qatar has witnessed remarkable socio-economic growth over the past few years, and therefore, the country has become a key focus for personal and institutional investors, particularly with the approach of the World Cup in 2022.\n5. Hypotheses development: 5.1 Readability and corporate profitability\n6. Methodology: 6.1 Sample\n7. Data analysis and results: 7.1 Descriptive statistics and correlations\n8. Conclusions, limitations and future research: This study investigates the reading difficulty of the \"text\" incorporated in the annual reports. It focuses on the readability and obfuscation of the annual reports published by Qatari listed firms. To achieve reliable results for the readability-firm financial performance relationship, the study controls for the firm's competitive position, governance structure, foreign ownership, size and age. The study found that Qatari listed firms' annual reports readability scores are rated between difficult and very difficult (the Flesch Reading Ease scores range between 16.92 and 46.08 with an average of 26.28). As Qatar is a country with a small population and many potential investors, this level of text complexity is high. The Gulf States, and Qatar is no exception, are populated by different skilled workers from different nationalities and with different backgrounds. Those workers, i.e. expats, participate in business processes including the preparation of annual reports. Hence, Qatari listed firms' annual reports are prepared by different individuals with different backgrounds, and consequently the level of text complexity, style and tone could differ remarkably from what it would be if these annual reports were prepared by a homogeneous group of people sharing the same background (Toit, 2017, p. 636).\n",
"role": "user"
},
{
"content": "Purpose: This paper aims to examine the relationship between the readability of annual reports and corporate performance in Qatari listed firms while controlling for a firm's competitive position, governance structure and specific features such as size, age and industry type.\nDesign/methodology/approach: This study relies on both agency theory and legitimacy theory to develop testable hypotheses. It uses a sample of 126 firm-year listed companies in the Qatar Stock Exchange to test obfuscation in the annual reports through examining the association between the readability of Narrative Disclosures (NDs) and corporate profitability, financial risk and agency costs for the period from 2014 to 2016.\nFindings: The findings show that firms with higher annual report readability are more profitable and have lower agency costs, which is an indication of the existence of \"obfuscation.\" Qatari firms may use narrative complexity as a disclosure strategy to enhance their image and consequently maintain their social legitimacy.\nResearch limitations/implications: Although the study findings suffer from limited global generalization, they can be generalized across Gulf Cooperation Council countries. Thus, future cross-country research is encouraged.\nPractical implications: The findings encourage Qatari policymakers to instate a policy for \"Plain English\" writing to make NDs easy to read by international investors.\nOriginality/value: This study is one of very few studies that examines the readability of annual reports in emerging market economies, i.e. Qatar. The study contributes to the paucity of research that examines English-written annual reports in non-English speaking countries.\n",
"role": "assistant"
}
] |
Mind the gap: Sport management education and employability auto-ethnographical analysis of sport management education and the sports fitness industry
|
[
"Auto-ethnography",
"Sport management education",
"Sport participation management",
"Employability inspired teaching"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\nIntroduction: Recent publications in the area of sports graduate capabilities and competencies (Baker et al., 2017) and expectations of employability skills in sport and recreation (Tsitskari et al., 2017) mirrored the author's experience, having previously been embedded in the sport industry and more recently teaching in the university sector in the field of sport management. The existence of, or potential for, a \"gap\" between sport management education (SME) and the sport centre managers' requirements of graduate employees arises from the system by which higher education (HE) and graduate employment are aligned. The current research project described here is predicated on observations (Alvesson and Spicer, 2016) describing an early career manager telling an intern that the basis of a really good report was \"One or two impressive PowerPoint slides\". Surprisingly, not a well-constructed argument based on well-researched and evidence-based specialist knowledge, widely acknowledged as the essence of degree education.\nLiterature: Employability and HE\nDesign/methodology/approach: The next section considers the methodology employed within this research project.\nFindings: This section presents the key findings of the study focussed on identifying the \"distance\" between SME and the occupational demand of sport and leisure centre graduate employment, based on the auto-ethnographical findings of the author and the interview responses of the sport centre managers. The meaning has been extracted from the narratives of the interviews.\nDiscussion: This paper demonstrates the value of using the long-term auto-ethnographical experience of an industry executive leader, and triangulating with qualitative data from a highly targeted sector, using a \"quick snapshot\" interview technique to gain insight into the skills, knowledge and competency perspectives of managers.\nConclusions: RQ1. 
When studying a specialist knowledge degree, i.e. sport management, in respect of gaining specific sports industry graduate employment, what is the optimum balance between subject-based education, generic management education and employability competencies?\n",
"role": "user"
},
{
"content": "Purpose: The purpose of this paper is to evaluate the position of sport management education (SME) in relation to employment in the sport fitness industry and whether a shift towards a greater emphasis on business and management module teaching is justified or, conversely, a greater utilisation of the rich data being generated by sport management researchers can be applied to the vocational development of sport management students.\nDesign/methodology/approach: The study was conducted in two parts. Part 1 consisted of interviews with the author by an interviewer to explore their industry experiences, the author's experience of entrepreneurial sport industry business and intrapreneurial sport teaching. The resultant data have been distilled into a series of paradigms, including employing sport industry staff and teaching sport management. Part 2 triangulates part 1 with the managerial experience of sports and leisure centre managers to develop a qualitative study. Sports and leisure centre managers were interviewed regarding candidate and employee competences judged as extreme examples of sport management knowledge and understanding.\nFindings: Four themes emerged: alignment, underpinning subject-specific knowledge, a voice from the sport industry in developing SME, and postgraduate opportunities for work-based learning.\nResearch limitations/implications: This is a small-scale study that requires further research in each of the three areas: sport spectator, sport participation and elite performer management.\nPractical implications: The sport centre interview instrument, with some refinements, could form the basis of an improved system to gain rich data from industry members of higher education business advisory groups. The developed data collection tool could increase the effective collection of data from a wider cohort.\nOriginality/value: The approach has produced an adapted category of pedagogy. 
Employability inspired teaching (EiT) is an original terminology describing the subject-specific content that leads to improved opportunities for subject (sport) industry employment.\n",
"role": "assistant"
}
] |
A metaphor for knowledge management: explicit islands in a tacit sea
|
[
"Knowledge management",
"Tacit information",
"Explicit information",
"Generation and dissemination of information"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\nIntroduction: Staying competitive is an ongoing battle for all organizations in today's business environment. Developing a competitive advantage and controlling that advantage is crucial to success. Many believe that effective knowledge sharing (Nevis et al., 1995; Davenport and Prusak, 1997; Drucker, 1998; Chow et al., 2000; Lin and Lee, 2004) is an important way to leverage core competencies and gain a competitive advantage (Gold et al., 2001). Drucker (1995) states that long-term competitive advantage for any organization is derived from access to some form of knowledge that it can exploit (Beckett et al., 2000). Therefore, it is crucial that knowledge is regarded as a main production variable (Sotirakou and Zeppou, 2004). Society is in a knowledge revolution and it requires that organizations capitalize on all available assets including knowledge assets (Tranfield et al., 2004). Recent studies have expressed considerable interest in knowledge-sharing practices. For example, Macneil (2001) suggested that senior managers could contribute significantly to the development of core competencies and skills through the establishment of a knowledge-sharing environment in which employees are encouraged to apply their explicit and tacit knowledge to problem-solving situations. Hislop (2003) investigated the impact of organizational commitment on the attitudes to knowledge sharing and related behaviors (Lin and Lee, 2004). The act of sharing knowledge requires that one possesses the knowledge or knows where to obtain the knowledge. To do this, an organization must manage its knowledge. Knowledge management is a method for simplifying and improving the process of sharing, distributing, creating, and understanding company knowledge (Nonaka and Konno, 1998). 
Knowledge management focuses on systematic and innovative methods, practices, and tools for managing the generation, acquisition, exchange, protection, distribution, and utilization of knowledge, intellectual capital, and intangible assets (Montana, 2000; Lin and Lee, 2004). In the next section, the authors briefly discuss the different viewpoints on knowledge. Following this, the authors examine the Knowledge Hierarchy paradigm with particular emphasis on its limitations. The authors then develop the explicit islands in a tacit sea (EITS) metaphor. This is followed by an examination of how the EITS metaphor fits with other existing KM research. The paper concludes with directions for further research.\nKnowledge: The role of knowledge management is increasingly significant: \"Knowledge is neither data nor information, though it is related to both, and the differences between these terms are often a matter of degree\" (Walters, 2000). Philosophers (Aristotle, 1928; Descartes, 1911; Locke, 1987; Kant, 1965; Hegel, 1977; Wittgenstein, 1958; Heidegger, 1962; and Merleau-Ponty, 1962) have debated the definition of knowledge for ages (Marr et al., 2003). Plato defined the concept of knowledge as \"justified true belief\" (Gupta et al., 2003). Post-modernists challenge that knowledge is a fundamental truth while cognitive anthropologists, ethnomethodologists, symbologists and structuralists have demonstrated the significance of situated skills and pragmatic knowledge (Assudani, 2005). A common theme in the KM literature is that data is combined to create information, and information is combined to create knowledge (Alavi and Leidner, 2001). There is a consensus that data are discrete facts. Yet after that, consensus is lacking, so the authors will provide a review of the many definitions for information and knowledge. 
Researchers define information as data with special relevance and purpose (Drucker, 1995), data that makes a difference (King, 1993), data in context (Galup et al., 2002), and a result of analyzing and interpreting data that carries meaning (Bourdreau and Couillard, 1999). There is even less consensus about the meaning of knowledge. Researchers define knowledge as the power to act and make decisions (Kantner, 1999), information in context, together with an understanding of how to use it (Davenport and Prusak, 1997), professional expertise appropriate for the domain (Bourdreau and Couillard, 1999), things that are held to be true and drive people to action (Bourdreau and Couillard, 1999), justified personal belief that increases an individual's capacity to take effective action (Alavi and Leidner, 1999), information that has been authenticated and thought to be true (Vance, 1997), integrated information in context (Galup et al., 2002), information made actionable (Mahlitta, 1996), and information made actionable in a way that adds value to the enterprise (Vail, 1999). Knowledge is often divided into two categories - explicit (knowledge for which a person is able to make available for inspection) or tacit (knowledge for which a person is unable to make available for inspection) (Polanyi, 1958; Davenport and Prusak, 1997). Another definition comes from Shin et al. (2001), who state, \"Tacit knowledge resides in the individual's experience and action. Explicit knowledge is codified and communicated.\" Explicit knowledge \"can be articulated in formal language including grammatical statements, mathematical expressions, specifications, manuals, and so forth\" while \"tacit knowledge is hard to articulate with formal language. It is personal knowledge embedded in individual experience and involves intangible factors such as personal belief, perspective, and the value system\" (Nonaka and Takeuchi, 1995). 
While tacit knowledge may be in the mind of the expert, contact information for the expert can be stored explicitly in a knowledge directory or \"yellow pages of experts.\"\nThe knowledge hierarchy: The most common paradigm in the KM literature is of a knowledge hierarchy (Nissen et al., 2000; Davenport and Prusak, 1997), as depicted in Figure 1. In this paradigm, each level in the hierarchy builds on the one below it, so data are required to create information, and information is required to create knowledge. The knowledge hierarchy depicts the conventional concept of knowledge transformations, where data is transformed into information, and information is transformed into knowledge. Variations on this central theme include Tuomi (2000), who proposes an inverted hierarchy. His position is that knowledge is required to represent information, which must be done to store data. Nissen (2002) extends this concept with a model containing two hierarchies. One hierarchy models the view of the knowledge seeker, while the second hierarchy is inverted and represents the view of the knowledge creator. From the creator's perspective, knowledge is necessary to create information, which in turn is necessary to create data. From the seeker's perspective, data is placed in context to create information, and information that becomes actionable is knowledge. The knowledge hierarchy model is predictive, as it indicates the relative actionability and volume of the levels. Using the vertical axis to represent actionability and the horizontal axis to represent volume, it can be seen that knowledge is the most actionable level but the rarest, while data is the least actionable level but has the greatest volume (Nissen, 2002).\nLimitations of the knowledge hierarchy\nExplicit islands in a tacit sea (EITS) metaphor: In this section, the authors develop an alternative to the knowledge hierarchy. This alternative is presented as a metaphor - explicit islands in a tacit sea (EITS). 
The value of metaphor is described eloquently by Nonaka (1991) in his seminal paper on the knowledge-creating company as follows:\nIt is a way for individuals grounded in different contexts and with different experiences to understand something intuitively through the use of imagination and symbols without the need for analysis or generalization. Through metaphors, people put together what they know in new ways and begin to express what they know but cannot yet say. As such, metaphor is highly effective in fostering direct commitment to the creative process in the early stages of knowledge creation.\nThe EITS metaphor is graphically depicted in Figure 2. Each component of this figure will first be addressed separately in the following sub-sections.\nExplicit knowledge - the islands\nDiscussion: To validate the EITS metaphor, the authors will examine its fit with other existing research streams in KM that have not yet been discussed in this paper.\nSupporting the goals of knowledge management\nSummary: While the knowledge hierarchy is a well-known paradigm in KM, other research indicates flaws in the paradigm. Not all researchers are in agreement with the one-way transformation from data to information to knowledge. Tuomi (2000) argues for an inverted tacit hierarchy, while Spiegler (2000) argues for a double hierarchy for explicit systems. Additionally, taxonomies such as Earl (2001) include in KM not only explicit and tacit systems, but research streams such as behavioral aspects, organizational learning, and strategic knowledge management. The EITS metaphor integrates these many diverse theories into a more holistic representation of KM. The EITS metaphor extends the theories of knowledge transformations while integrating the knowledge hierarchy, reverse hierarchy, and double hierarchy paradigms because none is mutually exclusive. It extends the theories of knowledge transformations in two ways: data to knowledge in explicit systems, and knowledge to data. 
In explicit systems, data may be transformed directly into knowledge. In both explicit and tacit systems, knowledge from one domain becomes data in another. Table I shows the transformations supported by the paradigms that have been discussed. The EITS metaphor is a predictor for actionability, verification, and volume of knowledge. Knowledge is the top island, indicating that it is the most actionable and highly verified source, followed by information on the middle island, and data on the bottom island. Relative volume is indicated by the size of the islands; the data island is largest, the information island is in the middle, and the knowledge island is the smallest. Two aspects of Figure 2, the relative size of the islands and the size of the sea, are not to scale. If the ratio of data to knowledge were drawn to scale, the knowledge island would be too small to be visible. It would be an interesting research challenge to measure the ratio between explicit and tacit knowledge, but lacking such research, the authors guess that there is a larger volume of tacit knowledge than explicit knowledge, indicating that the sea should be larger. The EITS metaphor presents a more holistic view of KM than the knowledge hierarchy paradigm. It is hoped that this research will provide a vehicle for further research into knowledge transformations, flows, and distribution. The EITS metaphor extends the knowledge hierarchy by the addition of distribution systems, behavioral aspects, organizational learning, and strategic knowledge management. The bridges represent the linkages between explicit systems, and the linkages between tacit users and explicit systems are represented by the shores. The primarily tacit research streams of KM are located on the beach.\nDirections for further research: The EITS metaphor recognizes six knowledge transformations. One interesting research question is how the transformations are performed and for what purpose. 
One possibility is that the knowledge hierarchy represents the perspective of a knowledge seeker, who uses data to create information and information to create knowledge. The inverted hierarchy, on the other hand, may represent the view of the knowledge creator, who uses his knowledge about the domain to create information structures necessary to hold data. The knowledge to data transformation occurs when a knowledge seeker places knowledge in another context, where it no longer has its original meaning and becomes data for the new task. The transformations of data to tacit knowledge, if such transformations occur, provide another interesting research topic. More research is also needed into the appropriate mechanisms for sharing and distributing knowledge. KM is hindered in practice because of the difficulty of knowing what knowledge is available, and being able to access the knowledge. The EITS metaphor prescribes an EKD as a portal to the knowledge stored by the firm, but other forms of sharing and distributing knowledge are necessary to empower knowledge seekers in the firm. For example, Allee (1997) sub-divides the process of knowledge sharing into socializing, communicating, distributing, teaching, and transferring.\n",
"role": "user"
},
{
"content": "Purpose: - This paper aims to examine the current thoughts on knowledge management (KM) and to develop a metaphor to combine these thoughts in a new way that effectively conveys the different types of knowledge and ways of managing it.\nDesign/methodology/approach: - The literature on the transition of data to knowledge is reviewed. A popular paradigm in KM states that data are integrated to create information and information is integrated to create knowledge. This paradigm is represented as a pyramid-shaped hierarchy with knowledge at the top, information in the middle, and data on the bottom. Why this paradigm is a simplistic and limited view of knowledge and KM is discussed.\nFindings: - The \"explicit islands in a tacit sea (EITS)\" metaphor is explained and discussed in the context of knowledge and knowledge management (KM).\nPractical implications: - The EITS metaphor more accurately and completely describes knowledge in the context of KM. The practical implications of this metaphor are its flexibility and transparency of the transitional actions that affect the evolution of data to knowledge.\nOriginality/value: - The EITS metaphor is an evolution of the prevailing frameworks and removes the apparent limitations in earlier frameworks. The paper provides a paradigm shift in the discussion of KM.\n",
"role": "assistant"
}
] |
Quality improvement in hospice settings: perceptions of leaders
|
[
"Leadership",
"Palliative care",
"Hospice",
"Quality improvement (QI)"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\n__NO_TITLE__: Terminally ill patients expect high-quality and personalized care at their end of life. Rather than seeking aggressive treatment procedures, these patients prefer better pain management strategies, spiritual care, palliative care and other options that would allow them to die peacefully. Evidence suggests that hospice provides a range of services to deal with the end of life care to both patients and families (Austin and Fleisher, 2003). Implementation of quality improvement (QI) and process improvement methods is needed to meet the needs of patients and family members as they face a difficult time in their lives (Burstin and Johnson, 2017). Hospice was recognized as the primary source available for patients to help them achieve better end of life care (Noe and Smith, 2012). Hospice organizations provide supportive and palliative care to patients who seek comfort in their end of life stage (Noe and Smith, 2012). Uncoordinated processes, faulty communication and lack of standardization can result in serious problems for patients and families seeking end of life care. This can make situations even worse for terminally ill patients as they feel \"left out\" and not in control of their end-of-life decision making (DeVore, 2017). The Institute of Medicine (IOM) recognized end-of-life care as a national priority area for application of QI strategies (Schenck et al., 2014). It is important to note that leaders have a significant role in implementation of QI methods needed to streamline current systems of care for patients in hospice care organizations. 
Support from leadership may encourage clinical and administrative staff to implement processes that focus on improving quality of life for frail adults and persons nearing end-of-life.\nBackground: There has been a significant increase in the number of patients who require high quality, safe and efficient systems of care so they can attain physical, psychological and social well-being as they approach end-of-life (Schenck et al., 2014). Hospice care also provides a supportive environment to the families who may need assistance as they prepare for the death of their loved ones[1]. Improper information exchange between healthcare facilities, lack of coordination between providers and ineffective processes may create sudden changes in care plans which could lead to the need for a large number of meetings, increased communications between providers, patient software updates, manual updates and paper work. This could have a severe physical or psychological impact on patients and families who are already going through difficult times in their lives. Effective integration of QI methods may enable hospice care to achieve significant gains in quality, safety and efficiency of patient services (Kelby, 2011). This highlights the importance of healthcare leaders designing efficient processes that support the care of the patient at end-of-life.\nPurpose statement: The purpose of this qualitative study was to examine leaders' perspectives on QI application in palliative and hospice care organizations. In addition, to determine how leaders define quality in patient care, clinical and administrative processes. Leaders must understand their roles with QI initiatives in healthcare settings, and QI strategies commonly used in healthcare facilities. 
Finally, to examine leadership support and education that is needed to implement these strategies, and challenges surrounding implementation of QI strategies.\nMethods: Study participants\nResults: After completing interviews with seven participants, five themes were developed through content data analysis: patient-centered care, continuous QI, leadership involvement and commitment, communication as a foundation for QI and perceived barriers.\nDiscussion: This research aimed to explore and examine perceptions of leaders toward implementation of QI strategies in hospice and palliative care settings. We found that leaders in a hospice care organization placed high importance on quality in patient care processes and worked consistently to provide resources so employees can succeed in their current roles. Participants felt that efforts were made to pay attention to needs of patients/families, collect feedback, regularly communicate and enhance methods to provide high-quality services.\nConclusion: The objective of this study was to examine leaders' perspectives on QI application in palliative and hospice care organizations, the importance of QI initiatives, common strategies used and QI project impacts. QI processes are needed to provide safe, efficient and quality care for terminally ill patients. There is an identified need for continuous QI, with staff involvement, that is supported by committed leadership. It is essential that leaders communicate and fully support staff for successful implementation of QI projects.\n",
"role": "user"
},
{
"content": "Purpose: The purpose of this paper is to explore and examine attitudes and perceptions of leaders on application of quality improvement (QI) strategies in a palliative and hospice care organization.\nDesign/methodology/approach: This study employed qualitative research methodology where leaders working in a hospice and palliative care organization were invited to participate in 45-60-min-long semi-structured interview. Interviews were recorded and transcribed verbatim. Qualitative content analysis was utilized to analyze the data collected during participant interviews.\nFindings: Seven leaders participated in the interviews. Five themes were developed from data analysis: patient-centered care; continuous QI; leadership involvement and commitment; communication as a foundation for QI; and perceived barriers. Data analysis suggests that use of QI approach in palliative and hospice care enhances the quality of care provided for patients, and can help improve patient satisfaction.\nPractical implications: Because there is a paucity of research on implementation of QI strategies in hospice and palliative care settings, this research can have wide practical implications. This research can provide useful practical tips to leaders as they work on implementing QI projects in their organization.\nOriginality/value: This manuscript can be of value to leaders, administrators and academicians who are interested in applying QI principles to healthcare processes especially in palliative and hospice care settings. Ability to work with others, solid communication and involvement of employees from all levels can help in streamlining current systems of care.\n",
"role": "assistant"
}
] |
IFRS and value relevance: evidence based on Canadian adoption
|
[
"IFRS",
"Canadian GAAP",
"Value relevance"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\n1. Introduction: This study uses Canadian data to investigate whether mandatory adoption of International Financial Reporting Standards (IFRS) affected the value relevance of accounting information reported by Canadian companies. Prior literature provides evidence of changes in the relevance of accounting numbers over time even without major changes in the accounting standards applied. Although the literature yields mixed results on trends in value relevance, the preponderance of evidence suggests that the value relevance of accounting information has declined over time (see Collins et al., 1997; Francis and Schipper, 1999; Lev and Zarowin, 1999; Core et al., 2003; Dontoh et al., 2004).\n2. Literature review and hypothesis development: 2.1. Theoretical framework\n3. Research methodology: 3.1. Research design\n4. Empirical results: 4.1. Descriptive statistics\n5. Conclusions: On January 1, 2011, Canada became the first G7 nation outside the European countries to join the community of nations that permit or require IFRS for financial reporting by mandating all publicly accountable enterprises with the exception of certain financial institutions and rate-regulated enterprises to adopt IFRS. We use three types of measures of value relevance to assess IFRS effects: first, the adjusted R2 of regressions of price on book value per share and EPS; second, the adjusted R2 of regressions of returns on EPS and changes in EPS; and third, an incremental association study. We adopt a multiple model approach that includes a book value and earnings Ohlson model, and a modified Balachandran and Mohanram (2011) model.\n",
"role": "user"
},
{
"content": "Purpose: The purpose of this paper is to investigate whether financial information prepared and disclosed under International Financial Reporting Standards (IFRS) has incremental value relevance vs information prepared under generally accepted accounting principles (GAAP) in Canada.\nDesign/methodology/approach: The authors employ a difference in differences methodology and estimate value relevance using: first, the adjusted R2 of regressions of stock price on book value and earnings; second, the adjusted R2 of regressions of stock returns on earnings and changes in earnings; and third, a time series incremental association return estimation. The authors use multiple models including a model similar to the Ohlson (1995) model and a modified Balachandran and Mohanram (2011) model to investigate value relevance in the period 2008-2013.\nFindings: The authors provide empirical evidence, based on unique Canadian environment, that accounting information prepared and disclosed under IFRS exhibits higher price and returns value relevance than accounting information prepared previously under local GAAP. Sensitivity analyses and yearly trends regressions produce collaborating evidence.\nOriginality/value: The study provides early empirical evidence that value relevance increases in mandatory IFRS adoption, based on unique Canadian adoption. 
The Canadian adoption is unique because Canada: first, is the first G7 non-European country to adopt IFRS; second, had pursued a dual strategy of harmonizing with the US GAAP while supporting IFRS convergence; third, provided information environment that mitigates the problems associated with measuring the effects of IFRS adoption in the European countries where IFRS or its predecessor - international accounting standards - had permeated the reporting environment prior to the mandatory adoption in 2005; and fourth, allowed firms listed on the US exchanges to continue to use or adopt the US GAAP for financial reporting and thus, provided a group of benchmark firms drawn from the same social-political and economic environment as the treatment firms. The study clarifies prior inconsistent results from European samples.\n",
"role": "assistant"
}
] |
A functional and symbolic perspective to branding Australian SME wineries
|
[
"Branding",
"Small‐to‐medium‐sized enterprises",
"Winemaking",
"Australia"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\n__NO_TITLE__: An executive summary for managers and executive readers can be found at the end of this article.\nIntroduction: The Australian wine industry has witnessed phenomenal growth in the number of wineries in recent years, growing from 530 in 1990 to more than 1,600 presently. Accompanying this growth has been a trend of market domination, with 94 per cent of Australia's production now accounted for by just 20 companies. Furthermore, retail consolidation led by the Australian supermarket chains is leaving the remaining wineries, of which 85 per cent are small producers, struggling to obtain shelf space in liquor stores (Evans, 2002). These characteristics have made the Australian market highly fragmented and increasingly competitive, forcing many wineries to place increasing emphasis on cellar door sales for survival (O'Neill et al., 2002; Getz, 2000). This competitive environment has prompted many academics to comment on the need for increased emphasis on marketing, particularly branding, to ensure success in the future (Beverland, 2000; Getz, 2000; Lockshin, 1997). Indeed, Reid (2002, p. 37) notes that \"the development of strong and desirable brands will be paramount to the future success of all wine companies\". Despite this general consensus, it appears that little empirical research has been undertaken on this aspect of the wine industry, particularly regarding small and medium-sized (SME) wineries. 
Acknowledging this and many other gaps in current literature, academics have commented on the need for more research to be undertaken on all aspects of the industry, particularly through case studies and comparisons of successful wineries, to further the knowledge in the area (Dowling and Getz, 2001; Hall et al., 2000; Getz, 2000). Despite the vital role that brands play in the successful marketing of wine, there appears to be a paucity of empirical research into branding in the wine industry. Beyond the recommendation that wineries place an increased emphasis on branding in their strategy, very few studies detail the concept of branding from the perspective of the winery. Additionally, marketing related inquiry focusing specifically on SME wineries has been minimal. Considering that 85 per cent of wineries in Australia can be classified as small, and a further 4 per cent classified as medium (ACIL, 2002), the lack of research regarding SME wineries appears concerning. This study aims to contribute to the research that is needed to fill a significant gap in the literature concerning marketing and branding of SME wineries. Guided by the research question \"How do small and medium-sized wineries in Southeast Queensland and the Canberra District approach branding in their marketing?\", this research inductively seeks further to understand branding from the perspective of the SME winery.\nLiterature review: The process of branding has been around for centuries as a means to distinguish the goods of one producer from those of another (Keller, 2003). Although various definitions appear in the literature, a brand can essentially be described as \"an identifiable product, service, person or place, augmented in such a way that the buyer or user perceives relevant, unique, sustainable added values which match their needs most closely\" (de Chernatony, 2001, p. 9). This definition of a brand highlights the added values that the consumer perceives inherent in a brand. 
Numerous discussions in the literature separate these added values into two, with a brand delivering functional value and symbolic value (Bhat and Reddy, 1998; de Chernatony et al., 2000; Belen del Rio et al., 2001; Meenaghan, 1995; Park et al., 1986; Travis, 2000). Functional values associated with a brand relate to the tangible, rationally assessed product performance benefits that satisfy consumers' practical needs (de Chernatony et al., 2000; Bhat and Reddy, 1998). Although it is important to promote functional differences in the brand, de Chernatony et al. (2000) note that symbolic values are more sustainable as a form of differentiation than functional values. Symbolic values associated with the brand, also termed emotional values, relate to the intangible feelings and symbolic benefits that satisfy the consumer's self-expression needs (Bhat and Reddy, 1998). Meenaghan (1995) additionally comments that it is the symbolic values inherent in a brand that send social signals on behalf of their consumers. Bhat and Reddy (1998) further highlight the multidimensional nature of brand symbolism, asserting that brands do not have to be positioned as prestige to tap into the symbolic needs of the consumer. Hence, the symbolic values may relate to a consumer's personality or self concept that includes fashionability, freedom of expression, prestige and exclusivity (Bhat and Reddy, 1998). Park et al. (1986) first proposed that a brand concept can either be functional or symbolic, with brands positioned as either, but not both. Bhat and Reddy (1998) further advanced the theory by proposing that functionality and symbolism are separate components, with it being possible for a brand to have both symbolic and functional appeal. Therefore, it appears that consumers do not have any trouble accepting brands that have both symbolic and functional appeal (Bhat and Reddy, 1998). De Chernatony et al. 
(2000), who propose that the value of a brand is a multidimensional construct which includes both functional and symbolic benefits as perceived by consumers, further support Bhat and Reddy's (1998) findings. As wine has increasingly been regarded as a lifestyle product (WFA, 2002), it seems beneficial to embrace the concept of symbolic value that relates to consumer's self expression needs in research on branding in the wine industry. Apart from Getz (2000), who notes that a brand requires a blending of functional and symbolic values, Lockshin et al. (2000) appear to be the only authors who discuss a brand's functional and symbolic values in the context of the wine industry. Lockshin et al. (2000), without an apparent empirical basis, adopt Bhat and Reddy's (1998) proposition that brands can simultaneously have both functional and symbolic appeal and recommend to practitioners that equal importance be placed on developing both the functional differences and the symbolic values of wine brands.\nConceptual framework: In theory building research, no matter how inductive the approach, a prior view is needed of the general constructs or categories that are to be studied (Voss et al., 2002). Miles and Huberman (1994) suggest doing this through construction of a conceptual framework. Such a framework underlies the research by explaining, either graphically or in narrative form, the main things that are to be studied and compels the researcher to think carefully and selectively about the constructs and variables to be included in the study (Voss et al., 2002). 
Hence, the conceptual framework guides the theory development process by providing a tentative framework within which it is anticipated that the questions, theory and data fit together (Fawcett and Downs, 1992). Much of the extant literature relating to winery operations and marketing generally distinguishes two types of wineries: those who focus predominantly on production and those who have an additional focus on marketing (Beverland, 2000; Getz, 2000; Reid, 2002; Shelton, 2001). Shelton (2001) terms the two general types product driven wineries and market driven wineries. Getz (2000) presents the most concise description of these two general types. He notes that a portion of the wine industry is focused on production, while other wineries have a dedicated marketing strategy in addition to ensuring a focus on production. The two-type separation of winery operations described above forms the basis of the conceptual framework. Shown in Figure 1, the model presents a tentative prediction of the emergence of two approaches to branding based on branding sophistication. The first predicted type that guided the investigation is one with a low level of branding sophistication. Based partially on previous descriptions in the literature of product driven wineries, this typology is characterised with a dominant focus on the production of wine and a relatively limited approach to marketing and promotion aspects such as branding. The second predicted type that guided the investigation is one with a higher level of branding sophistication. Based partially on previous descriptions in the literature of wineries with both a focus on production and a dedicated marketing strategy, this typology is characterised with an increased focus on marketing and promotion aspects such as branding. The case research method, anchored in the qualitative research domain, was used to reveal the details and profile the characteristics of each of the two typologies. 
Bhat and Reddy's (1998) brand theory, discussed previously in chapter two, was used to interpret this information in terms of the functional and symbolic values of each case brand. Additionally, it is important to highlight the tentative nature of the conceptual model. Establishing the possibility of two general approaches to branding provided a frame of reference to focus on certain themes throughout the investigation. Therefore the two-type typology was open to modification during the case research process, depending on the emergent findings.\nResearch method: Qualitative case study\nFindings: The characteristics of each of the eight cases are summarised in Table II. A range of ages, sizes, and ways of doing business were witnessed across all cases. Analysis of the case study data resulted in the emergence of a model of SME winery branding, shown in Figure 2. Detailed within the model are two approaches that the wineries under investigation were seen to employ to brand their winery. The first approach has been termed product driven branding, with the second approach termed marketing driven branding. Three of the wineries investigated were found to implement product driven branding, with the remaining five wineries investigated found to implement marketing driven branding. In addition to distinguishing two approaches to branding, the model also details the functional and symbolic values inherent in the SME winery brand. Although differences emerged as to how the wineries approached their branding, all of the wineries were found to have several common features that acted as a foundation to their branding. The combination of these features can be regarded as essentially inherent in the nature of SME wineries and acting as a generic base common to both approaches to branding, illustrated in Figure 2. Six common factors were found to make up this foundation: 1. producing a premium product; 2. conveying an image of quality wines; 3. 
using a name and symbol to represent the winery; 4. forming business relationships and networks; 5. participating in regional events, festivals and shows; and 6. providing friendly service at the cellar door.\nDiscussion and conclusion: Two main findings of the research contribute to the better understanding of how SME wineries approach branding in their marketing: 1. the emergence of two approaches that the SME wineries took for branding; and 2. the identification of functional and symbolic values inherent in the SME winery brand. The identification of two approaches to branding, although not discussed previously in relation to brand development, appears to be supported by literature discussing the operational focus of wineries. As established previously, the literature relating to winery operations generally distinguishes two types of wineries: those who focus predominantly on production and those who have an additional focus on marketing. The identification of wineries with a product driven approach to branding appears to identify with Ali-Knight and Charters' (1999) regard for some wineries being predominantly focused on wine production. However, the findings of this study conflict with Ali-Knight and Charters' (1999) view that these wineries have little understanding of marketing. Several SME wineries identified to have a product driven approach to branding were found to have a sound understanding of marketing and how to brand their wines effectively. The identification of wineries with a marketing driven approach to branding appears to identify with Getz's (2000) regard for some wineries to have a dedicated marketing strategy that incorporates tourism to build brand equity. 
Building on this description, the current research has highlighted the role the experience at the winery, and its associated symbolic values, plays in the development of the brand. The identification of functional and symbolic values associated with the SME winery brand is well supported by the current theoretical framework on brand functionality and symbolism. Bhat and Reddy's (1998) theoretical framework asserts that a brand represents both functional performance attributes and intangible symbolic values for the consumer. Both of the emergent approaches to branding resulted in functional and symbolic values associated with the SME winery brand, a finding that is supported by Bhat and Reddy's (1998) notion that brands can simultaneously have both functional and symbolic appeal. Regardless of whether a SME winery implements a strategy of product driven branding or marketing driven branding, skills are needed to identify and manage the desired functional and symbolic values of the SME winery brand. With de Chernatony et al. (2000) indicating that the symbolic values are a more sustainable form of differentiation than functional values, an emphasis on developing the symbolism inherent in the brand should be rewarded with more sustainable differentiation.\nImplications for branding theory\nFurther research: The exploratory nature of the study prompted the adoption of a qualitative approach, which in turn provided a richness and depth of understanding of branding from the perspective of the SME winery. However, the findings and the emergent model should be seen as a tentative theory that requires further investigation. Further qualitative research that investigates a greater number of cases may provide further refinement and clarification of the model. 
Additionally, to be able to test and generalise the findings further to a wider population, future quantitative research that incorporates a large-scale survey of randomly selected wineries is required. The scope of the research was confined to SME wineries in just two of Australia's wine districts, both of which are not traditionally considered to be grape growing or wine producing areas. Therefore, further research is recommended that incorporates different regions to establish the branding similarities and differences between SME wineries in different regions of Australia.\nConclusion: The increasingly competitive nature of the Australian wine industry has resulted in the need for wineries to place a greater emphasis on the development of a strong brand identity that will differentiate them from the plethora of other wineries in the market (van Zanten and Bruwer, 2002). Despite this, empirical investigation into branding in the wine industry has been minimal, particularly regarding SME wineries. The current study adopted a theory building approach to the research situation in order to further understand branding from the perspective of the SME winery. Anchored in the qualitative domain, a multiple case study methodology was used to investigate the branding of eight SME wineries in two regions of Australia. Data analysis resulted in the development of a model depicting a two-type typology of SME winery branding. Also emerging from analysis of the case data was the identification of functional and symbolic values inherent in the SME winery brand. In addition to providing a better understanding of branding from the perspective of the SME winery, these findings have also set a foundation for further research concerning SME winery branding.\n",
"role": "user"
},
{
"content": "Purpose: - This study proposes investigating the branding of small to medium-sized enterprise (SME) wineries in an Australian context. By taking a qualitative approach, the theory building research seeks further to understand branding from the perspective of the SME winery, and in doing so, go some way in addressing the current deficit in the literature.\nDesign/methodology/approach: - Bhat and Reddy's conceptualisation of brand functionality and symbolism is used as a branding framework to underlie the research. A multiple case study design was adopted as a research method to provide case data on eight SME wineries. Data were collected through in-depth interviews with the owner/manager of each winery, direct observation and document analysis.\nFindings: - The findings are presented in the form of a model of SME winery branding, which, in addition to distinguishing two approaches to branding, highlights the functional and symbolic values inherent in the brand. The findings endorse the notion that brands can simultaneously have both functional and symbolic appeal. More radically, the emergent model suggests interdependence between the functional and symbolic properties of branding.\nPractical implications: - Practically, the findings highlight the importance of developing the symbolic values associated with the brand, which represent a more sustainable competitive advantage.\nOriginality/value: - By establishing a tentative theory on SME winery branding, this study has begun to address the current deficit in wine marketing literature and has set a foundation for further research.\n",
"role": "assistant"
}
] |
Life is not a shopping cart: three keys to building brands and improving customer loyalty
|
[
"Brand loyalty",
"Brand management",
"Customer relations",
"Loyalty schemes"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\n__NO_TITLE__: According to a recent study by the American research firm Yankelovich, the average consumer sees about six million marketing messages per year. That's 16,000 marketing impressions per day. That same study also found that 65 percent of surveyed consumers are annoyed by the sheer volume of messages flooding everyday life. Interestingly, less than 10 percent of marketers believe consumers feel this way. Worse still, almost 60 percent of consumers feel these marketing contacts have no relevance to their lives. Yet less than 15 percent of marketers believe consumers feel this way.\nBuilding brand loyalty: How do companies create brand loyalty in a marketplace populated by demanding and fickle consumers? Frankly, a brand is a promise. And promises are meant to be kept. At the Air Miles Reward Program, we've done a lot of thinking about brand loyalty and the significance of customer experiences. We've learned that life is not a shopping cart - and we've recognized this in the way we look at our brand and the role we play with our collectors. This mindset has helped to sharpen our focus on who we are and what we do. Together with our business partners, it has helped us deliver on promises made. Most consumers spend their money on what they need. But very few consumers get the chance to spend their money on who they are - to express themselves with what they've earned. We offer AIR MILES reward miles so that everyone has the chance to say, \"These everyday purchases are the things that I need, but this reward - well, this is who I am.\" The airline tickets, home electronics, gift certificates and other items in our redemption portfolio represent the tangible rewards of program participation. 
But the intangible rewards - the true rewards through which customer loyalty is fully realized - lie in the stories, photos, memories and emotional experiences the program sets free. Thomas Edison once said, \"Vision without execution is hallucination.\" In that light, organizations can no longer push brands out into the marketplace and expect them to sell. To sustain brand loyalty, companies must focus on deeper, more meaningful, relevant and mutually beneficial relationships with their customers. Here are a few thoughts to get you started:\n* Avoid the quick fix: the challenge for the loyalty industry is to move beyond the traditional thinking that has dictated today's consumer/retailer relationships, and to move away from the quick hits that drive immediate sales at the expense of creating emotional bonds that drive longer-term profitable relationships. The status quo is no longer acceptable - nor is it sustainable. Less customer loyalty means more opportunities for competitors to eat away at market share.\n* Manage the customer experience: a successful loyalty initiative is predicated first on creating a meaningful consumer experience. The brand promise must be clear, compelling and consistent across all consumer touch points. When managed with the consumer experience as its guiding principle, loyalty marketing offers businesses the opportunity to facilitate deeper, more meaningful, relevant interactions with customers. The financial benefits of this approach are clear: long-term, profitable consumer and partner relationships.\n* Derive customer insights: the true power of a loyalty program lies in its ability to glean customer insights through the data obtained as a result of a well-run program. These valuable consumer insights can help any organization manage the way their customer experiences their products and services. 
Companies that use program data to understand the who, why and how of customer behavior will be well on the road to properly defining and executing against their brand promise.The most successful loyalty initiatives enable organizations to answer the hard questions. Who are my best customers? How do they feel about my brand? Am I contributing to their overall positive experience? Can I retain their business over the long term? With the resulting data, you can take a sharper, more strategic focus on the comprehensive retail or service experience, and avoid the marketing disconnect evident in the Yankelovich study. That's how your brand takes on a new meaning to consumers. That's the stuff that instills loyalty.\n",
"role": "user"
},
{
"content": "Purpose: - With the average consumer receiving six million marketing messages each year, it is increasingly difficult for marketers to achieve their desired impact. The purpose of this article is to provide ideas on sustaining brand loyalty in a crowded marketplace.\nDesign/methodology/approach: - The paper cites a recent study to explain the reason for a disconnect between consumers and marketers, and why some brand loyalty programs fall short. It then provides the author's ideas for rebuilding a relationship with consumers, citing examples from the Air Miles Rewards Program.\nFindings: - By incorporating the three ideas presented within the article, it is believed that marketers can create deeper, more meaningful, relevant and mutually beneficial relationships with their customers.\nPractical implications: - Although consumers are assaulted by six million marketing messages each year, marketers can still take steps to keep their customers loyal. The paper reminds marketers that a brand is a promise and offers ideas on sustaining brand loyalty among consumers.\nOriginality/value: - The paper examines results of a recent survey which shows a disconnect between marketers and their marketplace. It then provides ideas on repairing that disconnect.\n",
"role": "assistant"
}
] |
Electrical fault diagnosis in induction motors using local extremes analysis
|
[
"Detection faults",
"Envelope techniques",
"Predictive maintenance",
"Vibration analysis"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\nIntroduction: A modern predictive maintenance program for electric motors must include several monitoring techniques in order to observe the operational performance parameters, such as temperature, electric current, magnetic flux, torque, vibrations, and so on, as well as being able to avoid, forecast, and detect faults. According to published surveys (Benbouzid, 2000; Zhang et al., 2011; Lamim Filho et al., 2013), induction motor failures include: internal faults such as short circuit of motor leads, low insulation, ground faults, broken bearings, and broken rotor bars; and external faults such as phase failure, unbalanced voltage supplies, mechanical overload, blocked rotor. These faults modify the modulation amplitudes, which are inherent in the vibration, current and magnetic flux signals, and produce a variety of undesirable phenomena such as parasitic torques, stray losses and magnetic noise.\nFFH-algorithm: For a modulated time sequence\nReal applications and comments: The faults were inserted in a three-phase motor: WEG (FH 88747), squirrel cage rotor, 5 CV, 1,730 rpm, 220 V, 13.8 A, 60 Hz, four poles, category N, 44 bars and 36 slots. A DC generator feeding a bank of resistance is used as a load system; Plate 1 shows the experimental setup. In all, 60 spectra of vibration were acquired in a series of ten tests for each form of excitation (without fault, two, four, eight turns shorted and unbalanced phases) and randomly repeated under full load conditions. To simulate a low isolation among turns from the same phase, tappings were made into the coil at four positions to enable two, four or eight turns to be short circuited externally. These tappings were connected externally and linked in series (two each time) with a resistance bank of 1 Ω, 100 W (each one) connected in parallel and added to the circuit in order to control the current intensity of short circuit to approximately 10 A. 
This current is near the nominal rated current (Lamim Filho et al., 2013).\nConclusions: The fault frequency components related to the rotor slot frequencies, observed in the vibration signal, are extremely difficult to identify clearly using spectral analysis by Fourier transform alone. The use of auxiliary signal processing tools is fundamental for the extraction of this \"masked\" fault information in the signal. Similarly to classical demodulation by the Hilbert transform (HT), the proposed technique allows the modulations related to the fault frequencies to be highlighted more clearly. The technique is easy to implement in any programming language, which enables it to be used in real predictive maintenance programs in industry.\n",
"role": "user"
},
{
"content": "Purpose: The purpose of this paper is to introduce an algorithm based only on local extreme analysis of a time sequence to further the detection and diagnosis of inter-turn short circuits and unbalanced voltage supply using vibration signals.\nDesign/methodology/approach: The upper and lower extreme envelopes from a modulated and oscillatory time sequence present a particular characteristic being of, theoretically, symmetrical versions with regard to amplitude reflection around the time axis. Thus, one may say that they carry the same characteristics in terms of waveforms and, consequently, frequency content. These envelopes can easily be built by an interpolation process of the local extremes, maximums and minimums, from the original time sequence. Similar to modulator signals, they contain more detailed and useful information about the required electrical fault frequencies.\nFindings: Results show the efficiency of the proposed algorithm and its relevance to detecting and diagnosing faults in induction motors with the advantage of being a technique that is easy to implement in any computational code.\nPractical implications: A laboratory investigation carried out through an experimental setup for the study of faults, mainly related to the stator winding inter-turn short circuit and voltage phase unbalance, is presented.\nOriginality/value: The main contribution of the work is the presentation of an alternative tool to demodulate signals which may be used in real applications like the detection of faults in three-phase induction machines.\n",
"role": "assistant"
}
] |
Mistaking a marketing perspective for ethical analysis: when consumers can't know that they should want
|
[
"Ethics",
"Customer satisfaction",
"Consumers"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\n__NO_TITLE__: In Gene Roddenberry's vision of a future society shown in the Star Trek television series and movies, the peaceful exploring humans sometimes confronted the villainous Ferengi, a society of marketing-term-spouting avarice-driven businessmen traveling through space searching for profit by any means necessary. This mirrors many other forms of modern US popular culture in which business profits provide the motive for all sorts of murder and mayhem. Even contemporary comic book versions of Superman altered his arch-foe Lex Luthor from an insane scientist seeking world domination to a brilliant businessman, a de facto graduate of the Ferengi School of Business Ethics who does not feel properly appreciated for his success.\n",
"role": "user"
},
{
"content": "Purpose: - The purpose of this paper is to point out the distinction between needs which marketing aims to satisfy, and the consumers' true needs for their own health and safety. While marketing professionals might cite consumer satisfaction as serving consumers, they are mistaking marketing for morality.\nDesign/methodology/approach: - Gives reference to some articles of recent research literature on consumers in the marketplace, with emphasis on the consumers' not-uncommon inability to know what they should need or want.\nFindings: - What satisfies or interests segments of consumers is not the same as understanding the consumers' interests.\nOriginality/value: - Points out an important distinction that marketing professionals wish to ignore. What is in consumers' best interests, the proper or ethical decision, might not be what most consumers perceive as their needs. Sometimes, even if something is in the consumers' best interests, there does not exist a segment that knows it should be needed.\n",
"role": "assistant"
}
] |
Gaps and barriers along the North Carolina agri-food value chain
|
[
"Value chain",
"Delphi technique",
"Farm-to-market",
"Food systems",
"Value-added processing"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\nIntroduction: Regional and local agriculture has been shown to support rural development, create jobs, and secure food production, positively impacting the health of residents and economies (Herath and Hill, 2013). However, even in regions with significant agricultural output and resources, residents and economies have not always benefited. Reasons for this disconnect between resources and expected benefits vary and are often context dependent (Bloom and Hinrichs, 2011). The underlying causes can be better understood by researching the issues that exist with the actors and within the activities that connect farms and other food producers to the end consumer, known as the agri-food value chain (Rominger et al., 2012; Passuello et al., 2015). Identifying issues along the agri-food value chain could help its members (e.g. farmers, food producers, food retailers, and consumers) mitigate these issues, and help improve the chain's effectiveness and impact on local and regional economies (Bloom and Hinrichs, 2011; Kaplinsky and Morris, 2013; Grandori, 2015).\nBackground: Food value chains\nSetting: Agriculture and agribusiness play a primary role in the NC economy. The combined food-related industries in NC (agriculture, fishing, food processing, and outlets) employ over 420,000 workers and contribute $78 billion to the economy (Herath and Hill, 2013; North Carolina Department of Commerce, 2014). NC enjoys significant and diverse agriculture and aquaculture output (Curtis et al., 2010), with food producers in coastal, plains, foothills, and mountain regions growing and harvesting livestock, as well as a wide variety of crops and seafood. The southeastern region of the state grows sweet potatoes, peanuts, soybeans, grapes, blueberries, and a wide range of animal proteins. 
Western NC is conducive to the cultivation of fruit orchards as well as vegetable production (Curtis et al., 2010).\nMethodology: Delphi technique\nResults: The sample in R1 ranged (according to response rate) from 206 to 212 and 195 to 199 in R2, yielding a 32.5-35.3 percent response rate. In R1 of the survey, 57.3 percent of respondents were female, with a similar distribution of 55.6 percent in R2. The majority of respondents were between 50 and 59 years old (R1=28.5 percent, R2=32.8 percent), closely followed by the 40-49 years old group (R1=25.1 percent, R2=22.2 percent). The majority of respondents reported a race/ethnicity of white, non-Hispanic (R1=91.4 percent, R2=89.2 percent). Demographic information of the panel participants can be found in Table III.\nAggregated categories of issues: The key issues identified through the Delphi study offer a number of insights into some of the primary concerns shared by the members of the agri-food value chain in NC. Interestingly, the critical issues cited by respondents who self-identified with a particular stage did not necessarily correspond with that stage, something that belies an interconnectedness, however constructed, among the different value chain members.\nDiscussion and implications: The themes that emerged from the data represent the major issues expressed by the participants in the study, as members of the agri-food value chain in NC. The overarching theme of connectedness suggests cooperative and collaborative attitudes of the participants with local communities, with policy makers at both the local and state level, and with other members of the value chain. Institutional connectedness concerns suggest that participants understand the importance of institutions in the value chain, not only to support and promote their own interests, but also the interests of other value chain members. 
Similarly, the items within the connectedness theme imply that the participants view their value chain relationships as holistic, even though they may interface with only one or two other value chain members.\nLimitations and future research: One limitation of this research was the subjective nature of dividing the agri-food value chain into stages, labeling issues, and assigning issues into different stages within the value chain. While efforts were made to include a complete set of response categories for each question, omissions were still made. Additionally, each stage was not represented equally, which is a limitation of using the Delphi study method for multiple groups of stakeholders. Future research should include stronger measures to solicit each category of respondent. Also, the focus of this project and subsequent lists of issues are specific to NC, and should be reevaluated when attempting to apply them in other locations. The data collected from this research suggests the need for a more extensive survey directed at the general public that will determine the public perception of the agri-food value chain in NC, and more specifically, differences between various stakeholders and public groups.\nConclusion: Farm to table, agritourism, and the emphasis on local food chains are all still taking hold in NC and other states and regions. Too often, researchers focus attention and efforts on the matters at hand, and rarely do stakeholders have time to reflect on the bigger picture and nuances of how members of the agri-food value chain connect. Encompassed in a value-chain analysis approach, the results provide insights that may otherwise not have been brought to light. One of the intended outcomes of this project is to inspire innovation in products, programs, processes, organization, and marketing. 
It is also hoped that the data collected will prompt advances that strengthen the agri-food economy, and increase availability of local, healthy food to all of the sector audiences represented in this study.\n",
"role": "user"
},
{
"content": "Purpose: The purpose of this paper is twofold: first, to formulate an exhaustive list of the issues, gaps, and barriers at each level of the agri-food value chain in North Carolina (NC), and second, to identify the issues of greatest importance to its members.\nDesign/methodology/approach: This research employed the Delphi technique in two stages of input. The first round of input was designed to create a comprehensive list of issues for each of nine \"stages\" of the agri-food value chain. In round two, the issues were prioritized.\nFindings: The top ten responses of each stage were aggregated into themes that represent the most critical issues identified by respondents: connectedness within the value chain, access to markets and marketing, affordability/availability of food and food distribution, farm profitability, societal awareness, and education about healthy, local food, and supporting institutions.\nOriginality/value: The findings could be used by practitioners to inspire innovation in food-related products, programs, processes, organization, and marketing. The findings can help farmers, institutions, food distributors, policy makers, and other members of the agri-food value chain to make decisions about food distribution and access in NC and in other states facing similar issues and circumstances. The findings of this research also have further reaching implications, such as the connectivity of members along the agri-food value chain, the impact of a strong agri-food value chain on agritourism and the potential value of state marketing initiatives.\n",
"role": "assistant"
}
] |
Strategic information disclosure, integrated reporting and the role of intellectual capital
|
[
"Value relevance",
"Integrated reporting",
"Intellectual capital",
"Equilibrium",
"Pharmaceutical sector",
"Strategic information disclosure"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\n1. Introduction: The value relevance of intellectual capital (IC) and integrated reporting (IR) has been extensively studied in the literature. With a few exceptions (e.g. Garanina and Dumay, 2017), prior works have mostly considered them in isolation, either focussing on the impact of IC disclosure on a firm's value and financing cost or on the value relevance of IR. The theoretical foundations underlying a manager's choice to disclose IC and its relation to IR have been mostly left untouched. IC is largely subject to voluntary disclosure, which this paper considers a strategic communication decision to influence investors' perception of a company[1], as discussed in Verrecchia's (1983) and Dye's (1985) seminal works. If a manager is concerned with his or her company's market value and discloses \"non-fundamental\" information to offer its best representation, measuring the value relevance of IR and IC disclosure could be more difficult than expected, as the estimation of a standard linear regression model could be invalidated by self-selection (Heckman, 1978; Leuz and Verrecchia, 2000).\n2. Literature review: The disclosure of IC is a broadly accepted condition to assess a company's value efficiently and make investment decisions consciously (Brennan and Connell, 2001; Bukh, 2003; Yongvanich and Guthrie, 2005; Swartz et al., 2006; Ghosh and Wu, 2007; Zambon and Marzo, 2007; Li et al., 2008; Alwert et al., 2009; Abhayawansa and Guthrie, 2010; Gamerschlag, 2013). To this extent, several authors have pointed out that conventional value metrics have progressively become complementary and have therefore partially lost their absolute predominance (Collins et al., 1997; Barth and Clinch, 1998; Francis and Schipper, 1999; Lev and Zarowin, 1999; Breton and Taffler, 2001; Bozzolan et al., 2003; Brown et al., 1999; Givoly et al., 2016). 
Various studies suggest that investors and other stakeholders consider IC disclosure useful in understanding long-term growth opportunities (Edvinsson and Malone, 1997; Cumby and Conrod, 2001; Eccles et al., 2001; Holland and Johanson, 2003; Cuganesan and Dumay, 2009; Abhayawansa and Guthrie, 2010).\n3. Theoretical framework: This paper studies the choice of adopting an integrated report in a simple two-period exchange economy, where each manager holds private information regarding the probable success of one's business.\n4. Research methodology: The validity of this study's theory is tested by considering the pharmaceutical sector, where IC largely contributes to the development of drugs that are able to pass clinical tests and be approved by regulatory authorities. Thus, IC can be considered the key value-relevant information disclosed with IR.\n5. Results: 5.1 Preliminary findings\n6. Conclusions: 6.1 Results and contributions to the literature\n",
"role": "user"
},
{
"content": "Purpose: The purpose of this paper is to use a theoretical and empirical model to investigate the adoption of the integrated reporting (IR) framework as a strategic choice to signal intellectual capital (IC) to equity investors, with specific reference to the pharmaceutical industry.\nDesign/methodology/approach: The choice of drafting an integrated report is modelled as a means for managers to strategically disclose price-relevant information related to IC. The voluntary disclosure model developed by Verrecchia (1983) is used, also introducing the role of financial analysts to derive a directly reproducible empirical equation.\nFindings: Theoretically, as IR requires managers to exert an effort in reporting activity, this work shows that in equilibrium, only firms with sufficient IC have decided to adopt IR, resulting in rational investors' willingness to pay more only for the forecasted earnings of integrated reporters. This theory is tested in the pharmaceutical sector, where the modelling choice is probably more valid, with mixed results.\nResearch limitations/implications: When compliant with the International Integrated Reporting Council's (IIRC) standards, IR provides the means to disclose IC in a perfectly verifiable way. Furthermore, since the IIRC has only recently been established, the conclusions have only been tested on a limited data set.\nOriginality/value: This work connects the value relevance of IR to IC by adopting an equilibrium approach, which, in turn, provides specific indications of how to build a consistent empirical test of the theory.\n",
"role": "assistant"
}
] |
Understanding young consumers' tendencies regarding eco-labelled products
|
[
"Young consumers",
"Structural equation modelling",
"Eco-label",
"Green consumerism"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\n1. Introduction: Environmental concerns (EC) have grown as production systems pollute the natural and social environment, and as products or services are developed without attention to maintaining a green identity. Increased EC leads to the concept of green consumerism (Elkington et al., 1990; Tanner and Kast, 2003; Sonderskov and Daugbjerg, 2011), with consumers becoming aware of the environmental impact of their purchase decisions, which unintentionally orients their attention toward environmentally friendly products. However, environmental consciousness, concern, awareness or positive attitudes toward green products do not necessarily mean that green consumerism in a society will grow, or that environmental-friendly behavior will develop (Thogersen, 2000; Gallastegui, 2002). Although environmentally conscious consumers appeared in the West during the 1960s and 1970s in response to the environmental effects of industrial development (Mostafa, 2007), there are also some consumers who remain uninvolved in green consumer groups. Higher costs are still the main concern of consumers, especially in developing countries. In such countries, product selection is, therefore, driven by domestic interests rather than by the products' environmental impacts (Guevara et al., 1997). At this stage, environmental or eco-labeling become crucially important for informing consumers about the environmental background of products or services to make them aware of environmentally harmful products and change their preferences in favor of environmentally friendly products (Anderson, 1990; Gallastegui, 2002). Through eco-labeling, consumers can understand the environmental impact of their purchase decisions while producers can inform the market about their green initiatives. This also helps producers to account for their environmental impacts of their operations, and thus increase their market shares (Morris, 1997). 
Therefore, eco-labels are important, first to protect the environment, and second, to ensure the sustainability of consumer behavior (Gallastegui, 2002).\n2. The role of eco-labels in green consumerism: Eco-labels are tools to indicate and communicate the environmental performance of a product (Gallastegui, 2002) with the aim of supporting consumer decision-making for environmentally friendly products by providing transparency and enhancing trust in the environmental identities of products (Thogersen, 2002). There have been a variety of studies on eco-labels (e.g. Gallastegui, 2002; De Pelsmacker et al., 2005; Sammer and Wustenhagen, 2006; Thogersen et al., 2010). Although several studies show that positive consumer response to eco-labels is increasing the market share of benign products (Teisl et al., 2008; Thogersen, 2002; Grankvist et al., 2004), it is actually a hard goal to make consumers pay attention to eco-labels (Thogersen, 2002).\n3. Methodology: The following discussion first identifies potential factors influencing consumers' approaches to eco-labeled products through a wide-ranging review to provide an extensive overview of various scales measuring constructs related to environmental knowledge, EC, environmentally friendly products and consumption practices. The questionnaire for this study is mainly developed from items selected from these measures, based on the context of the study. However, although the measures in our model are drawn from previous research, we have chosen first to implement EFA to re-specify the model. This is necessary, because, the adopted measures were all validated in developed countries, so there was a risk of sample data to being affected by cultural factors related to different societal consumption habits in the emerging country, Turkey (Cabuk et al., 2008; Aracioglu and Tatlidil, 2009). 
Green consumer behaviors are affected by different perceptions, attitudes, values or other subcultural aspects (Pedersen and Neergaard, 2006), which may lead to different results for young consumer groups in Turkey. This became evident when the researchers were administering the questionnaires to the respondents, noticing that some perceived them very differently to the concepts, or revealed different cultural values in their responses. These characteristics of the sample data moved the analysis into an exploratory mode (Anderson and Gerbing, 1988; Cui et al., 2005). In such situations, researchers can choose to apply either EFA or confirmatory factor analysis (CFA) to re-specify the model. In this study, the research model is first specified by employing EFA to gain information on all factor loadings and the cross-loadings. This helped to identify how the items correlated to form conceptual dimensions, which is a necessary step, especially in cases where measurement scales are being verified in new cultural contexts (Cui et al., 2005). We then evaluated the modified scale identified by EFA using CFA to check whether it supported the modified scale (Gerbing and Hamilton, 1997; Cui et al., 2005). These analyses helped us purify and validate the measures before formulating and testing the hypotheses. Our CFA used the same sample data for cross-validation since it was exploratory in this case (Van Prooijen and Van Der Kloot, 2001; Cui et al., 2005). Testing the fit of CFA on the same set of data as the EFA is useful before judging further fit. (Van Prooijen and Van Der Kloot, 2001). Thus, by using the same sample, we assessed whether the EFA results were replicated by CFA (Byrne, 1989; Van Prooijen and Van Der Kloot, 2001).\n4. SEM for hypotheses testing and findings: Our SEM model was constructed based on the measurement model purified and validated through EFA and CFA. 
Using SEM, we assessed the proposed structural model and tested the hypothesized relationships among the variables. The results are shown in Figure 2. Goodness-of-fit was 0.954 while adjusted goodness-of-fit was 0.931, indicating that 93 percent of the variance and covariance in the data observed were predicted by the proposed model. The normed χ2 value (χ2/df) was 1.488 and comparative fit index (CFI) was 0.976. Moreover, root mean square residual (RMR = 0.044) and root mean square error of approximation (RMSEA = 0.039) were all below the stated 0.05 threshold level.\n5. Discussion: Due to the rising importance of environmental issues, a considerable amount of research has been undertaken to examine environmental consumption behavior and its antecedents (e.g. Chan, 2001; Barr et al., 2003; Tanner and Kast, 2003; Ohtomo and Hirose, 2007; Biswas and Roy, 2015). Studies on green purchasing behavior have been conducted in western or developed countries (Tanner and Kast, 2003), emerging or developing eastern cultures (Chan, 2001; Mostafa, 2007; Ohtomo and Hirose, 2007; Biswas and Roy, 2015) and, more recently, a small number in cross-cultural contexts encompassing western and eastern cultures (Konuk et al., 2015). However, only a few studies have investigated eco-labeling (Thogersen, 2000; Egea and Frutos, 2013). This paper has therefore contributed to understanding Turkish young consumers' current eco-label purchase tendencies and their antecedents. Our study is distinct from the small number of Turkey-specific green consumerism studies (e.g. Konuk et al., 2015) by focusing directly on eco-label perceptions in an emerging country.\n
"role": "user"
},
{
"content": "Purpose: The purpose of this paper is to explore different factors influencing young consumers' approaches to eco-labeled products in an emerging country, Turkey.\nDesign/methodology/approach: Relevant measures were adapted from the previous literature to assess key constructs on environment and eco-label perceptions. First, an exploratory factor analysis was employed for identifying the key dimensions, and then structural equation modeling was conducted for testing the research hypotheses.\nFindings: The findings reveal that the existence of perceived environmental knowledge has an influence on eco-labeled product purchase tendencies, with environmental concern (EC) having a significant mediating effect.\nPractical implications: This study contributes to practice by addressing perceptual factors affecting young consumers in emerging markets such as Turkey. This relationship can be utilized to increase the tendency to purchase eco-labeled products to create EC enhancing programs in education in addition to environmental knowledge leveraging ones. Besides, these findings may also be beneficial in eco-labeled product marketing efforts.\nOriginality/value: This study provides value for the literature by investigating the perception of young consumers in an emerging market specifically, and adopts a more specific eco-label focus, which it integrates with an environmental perspective.\n",
"role": "assistant"
}
] |
Rethinking balanced scorecard (BSC) measures: formative versus reflective measurement models
|
[
"Performance management",
"PLS",
"Balanced scorecard",
"Structural equation model",
"Formative indicator",
"Reflective indicator",
"Covariance structure model"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\nIntroduction: The balanced scorecard (BSC) is a representative performance measurement system widely studied since its introduction in 1992 by Norton and Kaplan (Lee et al., 2013; Perkins et al., 2014). BSC was adopted by 64 percent of the top 1,000 American companies in 2000 and 20 percent of OECD members in 2003, and this trend continues (Balanced Scorecard Collaborative, 2003; Cunha Callado and Jack, 2015; Grigoroudis et al., 2012; Niven, 2011; Reefke and Trocchi, 2013). The BSC allows companies to discover key performance indicators that conform to abstract latent variables within the BSC perspectives (e.g. learning and growth, internal process, customers, and financial). It also assigns strategic weight (Kaplan and Norton, 2000).\nTheoretical background and hypothesis: Formative vs reflective measurement\nMeasurement issues in BSC studies: In the studies on corporate performance measurement systems, researchers should understand the indicators' characteristics to avoid measurement misspecification. The BSC is an integrated corporate strategy model for corporate practice (Kaplan and Norton, 2001), demonstrating the correlations among indicators on corporate core competencies (Kaplan and Norton, 1996). The performance measurement model of the working-level corporate practice should employ formative indicators rather than reflective ones (Fornell and Bookstein, 1982). It is suggested that firms must determine the relative influence of performance indicators, the correlations of the key performance indicators to better identify which measure to support or ignore, and that those indicators may vary depending on corporate strategies or growth stages (Miller and Friesen, 1983; Sohn et al., 2003). 
A formative measurement approach provides analytical information on the relative influence of indicators as well as latent variables.\nResearch model and hypotheses development: As shown in Figure 1, our conceptual model is a reconstitution of the causal relationship between BSC perspectives. In previous BSC studies, a measurement model was set with reflective indicators only. These studies analyzed the relative influence of BSC perspectives on financial performance at the latent variable level. However, they failed to analyze the impact of each indicator belonging to the latent variables. If a measurement model is constructed with formative indicators as shown in Figure 1, however, it is possible to understand the relative influence of indicators in addition to the latent variables.\nResearch method: Sample\nAnalysis and result: Existing research shows few standardized comparison methods for formative and reflective models. We organize the model comparison criteria as shown in Table III from a compilation of previous research. This study then presents analysis results into two parts. First, we discuss the measurement model to confirm the convergent and discriminant validity, and reliability of the constructs in the two models. Then, we discuss the structural model to test the hypothesized relationship among the constructs. Finally, we demonstrate the model comparison results in terms of the goodness of fit ratio, and the appropriateness of formative BSC model adoption as a corporate performance measurement.\nComparison of path analysis in the research model: Significance test of path coefficient by BSC perspectives\nDiscussion and lessons learned: The result of the measurement model comparison proves that both reflective and formative measurement models showed statistical validity and reliability. In addition, the comparison of the BSC's perspectives was causally interrelated in both the reflective and formative measurement models. 
Our result confirms that companies should try to achieve better performance in their learning and growth perspective, likely to result in improved financial performance. However, the result of the goodness-of-fit test revealed that the formative measurement model better explains the BSC causality model. Thus, it is more valid to set up formative indicators in the BSC measurement for corporate performance.\nConclusion, limitation, and future study direction: This study proved the usefulness of the formative measure analysis method and suggested its practical use, focusing on the BSC indicators most useful in developing corporate strategies. Above all, we showed that formative indicators should be used in the corporate environment by overcoming the limitations of conventional studies that were confined to causal relationships at the latent variable level; our formative model can analyze causal relationships at the indicator-to-latent-variable level in a more specific and empirical way. It is useful for researchers and practitioners to compare not only the relative effects of high-level latent variables, but those of low-level latent variables as well. Whereas most of the previous research on formative indicators focused on theoretical perspectives, this research addressed the practical assessment method and usefulness of the relative effects of formative indicators. With these relative effects, we found that there exist correlations among performance indicators and that the most influential indicators for the dependent variable have associations with other performance indicators. Most importantly, this study presents a practical guideline on the use of reflective and formative construct indicators using the PLS comparison test.\n",
"role": "user"
},
{
"content": "Purpose: Most empirical balanced scorecard (BSC) studies have shown a tendency to wrongly employ reflective indicators instead of the more theoretically suitable formative indicators. However, formative indicators are difficult to apply due to the lack of statistical software support and a standardized model testing method. The paper aims to discuss these issues.\nDesign/methodology/approach: This study empirically compares the reflective and formative measurement method with standardized model comparison criteria. After collecting 217 valid questionnaires from companies in South Korea, the authors applied a structural equation modeling technique to analyze the data.\nFindings: The result shows that the formative measure provides greater validity for the corporate performance measurement using BSC. Further, this study shows the indicators' relative influence on each BSC perspectives using the formative measure.\nPractical implications: This study proved the usefulness of the formative measure analysis method and suggested its practical use, focusing on the indicators most useful in developing corporate strategies. In addition, the authors showed that formative indicators could be used in the corporate environment by overcoming the limitations of conventional studies that were confined to causal relationships with latent variables.\nOriginality/value: This study may be the pioneering work that compares formative and reflective indicators simultaneously, addressing the usefulness of formative measurement and its application validity in the existing empirical studies using reflective measurements.\n",
"role": "assistant"
}
] |
CEO servant leadership and firm innovativeness in hotels: A multiple mediation model of encouragement of participation and employees' voice
|
[
"Firm innovativeness",
"CEO servant leadership",
"Encouragement of participation",
"Multiple mediation",
"Employees’ voice"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\nIntroduction: Businesses, and society more broadly, are increasingly welcoming leadership approaches which focus on followers' welfare (van Dierendonck, 2011). Servant leadership is the best example of such a leadership approach. Coined by Greenleaf (1977), and later developed as a theory over time (Liden et al., 2008; van Dierendonck, 2011), servant leadership concerns leaders whose primary purpose is to serve their followers by developing them to their fullest potential in numerous, varied areas such as self-motivation, leadership skills development (van Dierendonck and Patterson, 2015), and empowerment (Brownell, 2010; van Dierendonck and Rook, 2010). By doing so, servant leaders help to enhance the talented human capital of workers. This is very valuable in the current challenging and highly competitive climate, where the ability to respond and react to changes and new demands is paramount to survival (Slatten and Mehmetoglu, 2011). This is especially important in the hospitality industry, where it is not possible to predict and plan for every specific action needed to improve customer satisfaction. In this industry, having an empowered workforce that participates in organizational decisions can facilitate rapid responses to change and foster a strong orientation towards innovation (Bani-Melhem et al., 2018; Jimenez-Jimenez and Sanz-Valle, 2005).\nTheoretical framework and hypotheses: From CEOs to firm outcomes: the upper echelons theory\nMaterial and methods: A survey instrument was used to gather data from HR managers in the hospitality industry in Spain. Spain's tourism industry is highly competitive worldwide (World Economic Forum, 2017). 
It plays a strategic role in the Spanish economy, accounting for 14.2 per cent of total GDP and 14.5 per cent of total employment in 2016 (World Travel and Tourism Council, 2017), and includes a thriving hospitality industry whose customer service is responsible for its international attractiveness and success. The hospitality industry is also particularly suitable for hosting servant leadership approaches as optimal drivers of excellent customer service (Bouzari and Karatepe, 2017; Wu et al., 2013). As such, the hospitality industry in Spain offers a good opportunity to study the relationships proposed in this investigation. Specifically, data were gathered from HR managers of hotel accommodation firms with three, four or five stars, and employing more than 60 workers such that these hotels could have a sufficiently complex human structure to require an HR department. Sampling one informant per firm (i.e. the HR manager) is quite common in general and in HRM research (Bou-Llusar et al., 2016). After the questionnaire was pilot tested with a group of HR managers and academic experts, its clarity, comprehension, readability and suitability were confirmed. In pursuit of trustworthy information, the questionnaire expressly assured confidentiality. Access to the participating hotels was gained through personal contacts (HR managers) and by utilizing the snowballing technique through which contacts of contacts were targeted. In the end, 34 valid responses were received.\nResults: Partial least square-structural equation modeling results\nDiscussion and conclusions: Conclusions\n",
"role": "user"
},
{
"content": "Purpose: Drawing on the theories of servant leadership and upper echelons, this paper aims to highlight the mechanisms through which CEO servant leadership enhances firm innovativeness in hotels. This study aims to test a multiple mediation model by considering the mediating role of encouragement of participation (EoPART) - a high-performance human resources (HR) practice - and employees' voice (EVOICE) in sequence.\nDesign/methodology/approach: Data were collected from HR managers of 34 hotels in the hospitality industry in Spain, which represents an important international tourist destination. Two methods of rigorous data analysis were used (partial least squares [PLS], structural equation modeling and fuzzy-set qualitative comparative analysis [fs/QCA]), which enabled robust findings to be produced with minimal sample size requirements.\nFindings: CEO servant leadership had a positive indirect effect on firm innovativeness in hotels, via the sequential application of EoPART and EVOICE.\nResearch limitations/implications: The findings provide new HR-related insights regarding the encouragement of firm innovativeness in hotels: CEOs can boost innovativeness in their hotels through the development of EoPART systems, which in turn favor EVOICE.\nOriginality/value: This is one of the first studies to analyze whether CEO servant leadership has an impact on innovativeness in hotels. Moreover, this study is the first to show the internal mechanisms (EoPART, EVOICE) through which CEO servant leadership encourages hotel innovativeness.\n",
"role": "assistant"
}
] |
A fuzzy-based vulnerability assessment model for infrastructure networks incorporating reliability and centrality
|
[
"Centrality",
"Fuzzy inference system",
"Infrastructure networks",
"Reliability",
"Vulnerability"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\n1. Introduction: Infrastructure networks (INs) are inextricably linked to different aspects of human life such as safety, security, economy, public health and social well-being. The term \"infrastructure networks\" generally refers to lifeline systems such as transportation, power, water and gas networks. As a general rule, the more developed and reliable INs in a country, the better the living standards. However, INs are exposed to a myriad of operational threats that lead to disruption and dysfunction of their performance. The concept of a network vulnerability is related to the potential for disrupting the network components and degrading them in such a way that affects the overall performance of the network (Murray and Grubesic, 2007). The complex nature of INs provides the conditions for which disruption of critical components propagates in numerous ways, resulting in an unforeseen and cumulative impact on the network failure (Thacker et al., 2017). Such complexity reinforces the need to implement an efficient and effective vulnerability assessment model in order to protect these lifeline systems.\n2. Centrality analysis: A complex network is characterized by a network of multiple interconnected interacting components, in which each component has a different level of impact on the dynamics of the network. A complex network is specified not only by its individual components' function but also by the characteristic influence of each component on its behavior. This situation raises questions such as \"How to identify a network's most influential components?,\" \"To what extent a particular component affects network performance?\" and \"Which components have a more central role than others?\" Centrality analysis was created in the late 1940s to answer these questions. 
After more than six decades, the idea of centrality is still alive and is being mobilized in an ever-widening range of applications (Freeman, 1978).\n3. Fuzzy set theory: This section is meant to account for the basic definitions of the fuzzy set theory. It also presents the concept of fuzzy logic followed by a discussion of different steps in a FIS development. It then proceeds to present an overview of existing fuzzy-based risk management methods.\n4. Application: As discussed in the introduction section, the overarching aim of this study is to develop a fuzzy-based vulnerability assessment model obtained from integration of the reliability and centrality values. To demonstrate the process of the FIS described in the preceding section, a water distribution network (as an example of INs), as shown in Figure 2, taken from the literature (Islam et al., 2014; Shuang et al., 2014, 2017) is used as a case study. The network is a looped water distribution network comprising 25 sink nodes connected by 38 pipes. Water is supplied from two reservoirs connected to nodes 1 and 3. The example network is first mapped into an undirected graph with 25 nodes and 38 edges. The reliability of edges is displayed in parentheses.\n5. Results and discussion: The crisp values of vulnerability score obtained from the suggested FIS method are reported in Table IV. Additionally, the vulnerability score is applied to rank the criticality of the edges in the case study. The gray column indicates the ranking of the example network's elements. The ranking is obtained from the vulnerability score, where the highest value holds the highest ranking implying the highest criticality.\n6. Conclusions: The methods presently used for vulnerability analysis of INs have typically followed two distinct approaches: probabilistic-based methods and network theory-based methods. 
While the probabilistic-based analysis addresses the vulnerability problem by measuring the probability of maintaining the performance of an IN's components, the consequences of these components becoming inoperable are missing in this approach. Conversely, the network theory-based methods have seldom progressed beyond evaluating the consequences of component failure, irrespective of the probability of failure. From a vulnerability standpoint, neither the sole utilization of probabilistic methods nor the stand-alone use of network theory-based techniques is adequate for a realistic vulnerability assessment of INs. Instead, vulnerability must be assessed through a combination of these methods.\n",
"role": "user"
},
{
"content": "Purpose: The purpose of this paper is to attempt to shift away from an exclusive probabilistic viewpoint or a pure network theory-based perspective for vulnerability assessment of infrastructure networks (INs), toward an integrated framework that accounts for joint considerations of the consequences of component failure as well as the component reliability.\nDesign/methodology/approach: This work introduces a fuzzy inference system (FIS) model that deals with the problem of vulnerability analysis by mapping reliability and centrality to vulnerability. In the presented model, reliability and centrality are first fuzzified, then 16 different rules are defined and finally, a defuzzification process is conducted to obtain the model output, termed the vulnerability score. The FIS model developed herein attempts to explain the linkage between reliability and centrality so as to evaluate the degree of vulnerability for INs elements.\nFindings: This paper compared the effectiveness of the vulnerability score in criticality ranking of the components against the conventional vulnerability analysis methods. Comparison of the output of the proposed FIS model with the conventional vulnerability indices reveals the effectiveness of the vulnerability score in identifying the criticality of components. The model result showed the vulnerability score decreases by increasing reliability and decreasing centrality.\nPractical implications: Two key practical implications for vulnerability analysis of INs can be drawn from the suggested FIS model in this research. First, the maintenance strategy based on the vulnerability analysis proposed herein will provide an expert facilitator that helps infrastructure utilities to identify and prioritize the vulnerabilities. 
The second practical implication is especially valuable for designing an effective risk management framework, which allows for least cost decisions to be made for the protection of INs.\nOriginality/value: As part of the first contribution, we propose a novel fuzzy-based vulnerability assessment model in building a qualitative and quantitative picture of the vulnerability of INs. The second contribution is especially valuable for vulnerability analysis of INs by virtue of offering a key to understanding the component vulnerability principle as being constituted by the component likely behavior as well as the component importance in the network.\n",
"role": "assistant"
}
] |
The challenge of connecting and co-ordinating the learning agenda: A case study of a trade union learning centre in the UK
|
[
"Trade unions",
"Learning",
"Skills",
"Immigration"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\nIntroduction: The emergence of training and learning as a central plank of government and trade union renewal strategies has attracted the attention of many observers since the late 1990s (Forrester, 2000, 2004; Heyes and Stuart, 1998; Munro and Rainbird, 2004; Wallis et al., 2005). Increasing state investment in training, lifelong learning and education is a main feature of current regulatory politics. The centrality of education to the \"New\" Labour government marks a pivotal aspect of its political identity, furthermore the government views trade unions as institutions capable of operationalising some of their learning agenda, particularly in respect of adult learning in the workplace and potentially in the community (Wallis et al., 2005, p. 286). The government's commitment, and even desire, to involve trade unions in providing learning to the UK labour force was illustrated in 1998 when they set up the union learning fund (ULF) which made funds available to trade unions, through the Trades Union Congress (TUC), to develop innovative learning activities through a partnership framework. In terms of industrial relations, the current debate about trade union renewal and new forms of strategic innovation refers to the role of learning and training strategies being developed by trade unions in both the workplace and the community. Moreover, the 2002 British Employment Act conferred statutory rights to new union learning representatives (ULRs) within the workplace, which marks a new form of worker representation (Wallis et al., 2006).This paper sets out to examine a case study where innovative trade union strategies emerged in relation to the development of a learning centre. The learning centre addressed the training and learning needs of a workforce steeped in traditional understandings of learning and skills, and rarely exposed to the opportunity of a proactive approach to workplace learning. 
The case also highlights the role of learning in relation to black and minority ethnic (BME) groups. The paper is concerned with the challenges such new approaches bring: for example, how such projects can be adequately financed and staffed; where such approaches can be located, and broader spatial and community issues and concerns; the potential role of any employer; and whether this impacts upon union autonomy (Kelly, 1996; Kochan and Osterman, 1994; Oxenbridge and Brown, 2004, 2005). The debate on learning and union renewal, let alone its regulatory dimension, must address the new and old challenges facing such initiatives within new industrial relations. First, the limits of working with employers emerge due to the instability of employment and the manner of decision making within firms (Hoque and Bacon, 2006). Second, the focus on the workplace and the absence of consistent and community-based trade union structures beyond it mean that learning initiatives may be vulnerable during phases of restructuring. Third, the changing criteria of state funding and the marketised approach that they take mean that there is a further instability built into the learning agenda.\nThe learning revolution: Background\nResearch methods: The case study company investigated for this research, Leathermill Plc[1], was a leather mill and tannery that produced high quality leather for many of the world's leading brands of luxury and sports leather goods. The company's history can be traced to 1826, although the case study site in Leeds was established in 1975. The company was located in a large \"industrial estate\" on the outskirts of the city, predominantly manufacturing in nature. Nearby housing estates were mostly council owned and constituted \"less well off\" parts of the city. One such area was made up predominantly of Pakistani residents, although this area had recently experienced a large growth in eastern European migrants. 
Whilst voluntary sector networks and community led centres with links to the local council had improved community involvement and engagement in many aspects, there was a sense that trade unions and local employers did little to engage this community; this shall be addressed in more detail in the following case study section. The trade union recognised within the workplace, and seeking to develop links into local communities, was \"Community\". Community was formed through the amalgamation of the steel union, the ISTC, and the textiles union, KFAT; their name change was to symbolise their greater commitment to developing relations and offering support within wider communities and to go \"beyond the workplace\". This particular case was selected for a number of reasons. First, the sector has a strong tradition of union organisation and so the benefits of \"learning partnerships\" based around \"robust\" rather than \"shallow\" relationships (Oxenbridge and Brown, 2004) can be assessed. Second, employees in this industry typically possess few, if any, formal qualifications and so are in greatest need of learning; however, the sector is not one traditionally associated with advanced learning programmes, so representing a new and innovative approach in an old and traditional sector. And finally, this sector has witnessed a considerable increase in the number of migrant workers, making it ideal for unions to advance their learning agenda to a previously untapped and vulnerable workforce. The research sought to illustrate how innovative trade union strategies are emerging around the learning agenda and how these are being used to extend union strategies within the workplace and the community, as well as being used to develop new approaches to organising, and subsequently representing, minority workers. 
On the other hand, the research intended to show how such learning strategies can be fraught with difficulties, particularly when they are fixed into certain spaces, leaving them unable to develop or become sustainable once these spaces are challenged or closed. A detailed, exploratory research technique was therefore required so as to identify hidden features of the employment relationship and identify, track and interpret the relationships as they developed as well as highlighting the reasons for the employer and trade union pursuing training initiatives. A case study approach was adopted relying predominantly on in-depth interviews over a one-year period. Initial interviews were semi-structured, with the interviewer utilising a list of key themes of interest whilst allowing interviewees to openly \"ramble\" and highlight what they felt to be of most importance. Subsequent interviews were often unstructured as the researchers became more trusted by interviewees who would discuss issues in an open unstructured manner. A total of six visits were made to the company, with interviewees being interviewed a number of times over this period. Interviewees included: the deputy general secretary of the union (interviewed twice), two regional union officials (interviewed three times each), the senior shop steward at the workplace (interviewed five times), a union learning representative (interviewed twice), the learning centre manager (interviewed five times), the managing director (interviewed once) and one employee of Iraqi origin, one employee of Polish origin, three \"white British\" employees and one ex-employee. Furthermore, over 200 professional photographs were taken of the workplace and employees so as to compile a \"photographic history\" of the company and to compare working practices and the ethnic composition of the workforce to old photographs dating back as far as the nineteenth century which were passed to the authors by employees. 
This detailed research technique has generated both rich and detailed data.\nFrom shop floor to community: the many faces of the union learning centre: Background\nConclusion: The use of learning as a strategy of engagement and support raises highly complex issues. It is clearly a major plank of trade union responses to problems of social exclusion and the development of UK industrial relations systems. However, Stuart and Wallis (2007) have argued that one of the problems with vocational education and new forms of learning in the UK is what they term the missing links. Stuart and Wallis (2007) argue that in the case of the UK the problem with training is that trade unions have four problems: missing members (where there are few other voices except managers as in non-union firms), missing partners (a lack of strong mutually supportive arrangements between management, unions and others), a missing demand for skills (the lack of a demand for high skills from employers), and missing employers (a lack of legal obligations for training on employers). This paper confirms this argument in terms of the missing partner: employers sustain partnership relations on very specific issues and the strategic empowerment of trade unions is less likely (Martinez Lucio and Perrett, 2007). The role of the state is also uncertain in terms of the issue of dependence on it. In this respect, the paper points to how funding began to change in terms of state priorities and how learning became subject to cutbacks and a reorientation of funds. The learning revolution has been subject to changes, which can increase the problem of sustainability. Developments regarding the challenge to public resources for ESOL courses in 2007-2008 highlight some of the problems. There is a question of structures as well in the form of uneven spaces, locations and reference points for learning strategies and resources. 
The paper has argued that developing sustainable spaces for learning strategies and clearly co-ordinated approaches faces various problems. For example, access to workplace learning centres is difficult in the light of workplace and company restrictions. The closed nature of workplaces means that activity within them can be secluded from the external community due to the nature of management prerogative and workplace-based trade union custom. Also, relocating centres after closures may be an issue. The learning centre in our case found that moving to the external community in the wake of closure raised a new set of access issues. In this case study there were problems in terms of trade union structures and the failure of co-ordinated links and supports. The lack of co-ordinated approaches between the body of trade unions on the subject of learning raises problems of replication and a lack of effective utilisation of existing strategies. The TUC has therefore steadily located itself within this area in order to provide some strategic underpinning to the problem, but issues remain. Another issue is the absence of community structures, meeting points, halls and reference points for local communities and workers regardless of the rhetoric of community unionism in the UK (Martinez Lucio and Perrett, 2006). The image of committed trade unionists clutching their computers and not knowing where to locate them until a sports club makes an offer is a telling metaphor of the issues we have raised in this paper. This raises the problem of there being both absent or unclear spaces for community-based learning initiatives and the potential for competition and contestation from other organisations and community groups. The strategy in relation to communities does not give rise to any coherent picture of a community strategy: engagement along such lines is complex and differentiated. 
Many community initiatives misinterpret the broader realities and dynamics of communities: they have not developed clear links with local community groups, reference points for joint encounters and visible and tangible presence in the community. Trade unionists are often concerned with extending the logic of organising in ways that remain embedded in established cultural and institutional practices to BME communities for example without considering the cultural context of these and their spaces and points of engagement.In effect, what was an exemplary initiative for learning and social inclusion that was benchmarked throughout parts of the labour movement ended due to the way learning strategies were fixed into closed spaces being unable to develop once these spaces were challenged and closed. The issue of co-ordination in institutional and regulatory terms is usually discussed in terms of the workplace set of issues and their co-ordination (Stuart and Wallis, 2007), but this issue is one that also needs to be discussed if we are to appreciate the challenges brought by greater instability and change within the labour market. We are not suggesting that the tale of this learning centre is representative of learning initiatives as there are others based in communities which manage to anchor themselves in local needs, but it does appear that learning as a vital part of trade union strategies faces an economic context of change which is not effectively supported in regulatory or institutional terms by the state, employers and the fixed points of traditional industrial relations action. It is no wonder therefore that one of the main problems unions have in developing learning centres is not just a lack of funding but also in finding effective spaces in which to locate.\n",
"role": "user"
},
{
"content": "Purpose: - This paper aims to illustrate how innovative trade union strategies are emerging around the learning agenda and how these are being used to extend union strategies within the workplace and the community. However, it also shows how such strategies can be confronted with difficulties, particularly when they are located in fixed spaces, leaving them unable to develop or become sustainable once these spaces are challenged or closed due to inconsistent state and employer support.\nDesign/methodology/approach: - An in-depth case study approach was adopted for the research, using predominantly semi-structured interviews with a wide range of individuals over an 18-month period.\nFindings: - The paper suggests that although unions are increasingly pushing forward their learning agenda and developing workplace and community learning centres, which undoubtedly generate benefits for all stakeholders, their learning strategies are often fixed into specific, closed locations and spaces, finding it difficult to develop once these spaces are challenged or closed due to restructuring and organisational change.\nResearch limitations/implications: - The paper does not suggest that this case is representative of all learning initiatives, but it does seem that learning as a part of the inclusion and union renewal agenda is not clearly co-ordinated in terms of its social, spatial and employment position.\nOriginality/value: - This paper represents new empirical research presented at a time when learning initiatives as well as community initiatives are high on the government's agenda, particularly as it is encouraging \"the third sector\" to play an increasing role in providing public services. Similarly, the paper addresses issues of ethnicity, migration and learning, again topics high on government agendas.\n",
"role": "assistant"
}
] |
E-commerce and entrepreneurship in SMEs: case of myBot
|
[
"SMEs",
"Entrepreneurship",
"E-commerce",
"Case study",
"Contextualism",
"Interpretive",
"SMN"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\n1. Introduction: 1.1 Definitions and rationale\n2. Literature review: 2.1 E-commerce adoption and entrepreneurship in SMEs\n3. Research methodology: 3.1 The research approach and design\n4. Results: Given the framework of contextualism in Figure 1 that was discussed earlier, this section provides the constituent areas that interacted over time to give a picture of myBot's position relating to e-commerce entrepreneurship. Thus, the resultant position of the firm is a reflexive outcome of interactions of key components such as organisational characteristics, structure and role; the web application; Facebook and e-mail applications; managerial perception and intentions to innovate using Facebook; and the local business conditions that facilitated or limited the process of e-commerce entrepreneurship.\n5. Discussion of findings: This section is presented with respect to the research questions as follows.\n6. Conclusion: This study has explained how an SME in a developing country, pseudo-named myBot, has incorporated SMN into the e-commerce adoption and entrepreneurship process. The research was undertaken using a qualitative and interpretive approach that involved close interaction with the SME manager to reveal contextual and social issues that affected the e-commerce development process. Primarily, social commerce using Facebook is the main type of e-commerce that ensued between the consumers in the USA and myBot. The main factors are trust between the participating actors, the manager's commitment and innovativeness.\n",
"role": "user"
},
{
"content": "Purpose: The adoption of e-commerce in small and medium enterprises (SMEs) is a much-researched topic in developed and developing nations. The application of mobile devices and social media networks (SMNs) such as Facebook has revolutionised the e-commerce adoption process in SMEs. However, research into this area is still developing and there is a dearth of knowledge on how SMEs in developing countries enact and apply this phenomenon. The purpose of this paper is to explain how e-commerce entrepreneurship using SMN is conceptualised and applied in SMEs, and to highlight factors that influence this e-commerce innovation.\nDesign/methodology/approach: An interpretive and qualitative case study approach was adopted as a methodological foundation in a firm pseudo-named myBot. The study employed the contextualism theory as a meta-theoretical framework to conceptualise and analyse the e-commerce entrepreneurship process in the firm. The study employed face-to-face semi-structured and unstructured interviews with the manager as the primary means of obtaining in-depth information. Further information was obtained through website content analysis and document analysis of the firm's reports and observations.\nFindings: The results unearth a socially constructed feature of e-commerce entrepreneurship that draws from myBot's manager's innovative ability to use Facebook and e-mail to increase sales through its niche market. Factors that propelled e-commerce entrepreneurial activities using Facebook include trust, commitment and innovativeness.\nResearch limitations/implications: The findings of this study may change over time, given the dynamic changes in the information and communication technology industry and how these technologies are employed in a particular country and region of the world. Thus, results may not be generalised to other contexts. 
The study has a methodological limitation as it only had one informant who could respond to the study questions.\nPractical implications: The study provides a practical application of using the contextualism theory to analyse e-commerce entrepreneurship in SMN.\nSocial implications: The application of e-commerce using SMNs such as Facebook widens the scope of tools that SMEs may use.\nOriginality/value: The study provides a theoretical lens into the understanding of an e-commerce entrepreneurship process using a contextualism theory. The paper showcases how e-commerce using Facebook can be portrayed in a developing country context.\n",
"role": "assistant"
}
] |
Open innovation and firm performance: the role of organizational mechanisms
|
[
"Decentralization",
"Firm performance",
"Open innovation",
"Formalization",
"Low- and medium-technology firms"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\n1. Introduction: Ever since the publication of Open Innovation, the New Imperative for Creating and Profiting from Technology by Chesbrough (2003), the concept of open innovation (OI) has attracted enormous interest, both in practice and in academia. By focusing on various aspects of the OI process, many scholars have offered useful insights and have proposed various frameworks to support managerial decision making (Huizingh, 2011). A number of literature reviews are available which structure the contributions to the field and identify the trends and topics that should be further analyzed (see, e.g. Schroll and Mild, 2012; West and Bogers, 2014; Greco et al., 2015; Randhawa et al., 2016). The present paper focuses on two issues that have been described as relevant topics in the OI field: the need to understand how to benefit from OI, particularly in terms of its implications for overall firm performance, and the role of context variables (Huizingh, 2011; Lichtenthaler, 2011; Schroll and Mild, 2012).\n2. Theory and hypotheses development: OI was recently redefined by Chesbrough and Bogers (2014), who stressed the distributed nature of the innovation process and the purposive management of knowledge flows across the firm's boundaries, linking it to the three main types of OI, namely outside-in (inbound), inside-out (outbound) and the combined coupled type, as well as the associated mechanisms, including pecuniary and non-pecuniary flows. This conceptualization captures the broad set of possibilities for openness and the variety of facets that can be considered to examine and measure it. Accordingly, the relationship between OI and firm performance has been empirically studied by examining the effect of a wide range of facets related to openness on firm performance with different approaches and metrics.\n3. Methods: 3.1 Sampling and data collection\n4. 
Findings: Hypotheses were tested through moderated hierarchical multiple linear regression analysis. To assess potential problems arising from multicollinearity, we checked bivariate correlations and calculated variance inflation factors (VIFs). Table V lists the correlation values; they are well below the recommended threshold suggested by Hair et al. (2010). Also, the VIFs for all variables are below 4.8, which indicates no multicollinearity issues (Table V).\n5. Discussion: Our results show that the adoption of an OI strategy has a positive effect on company performance, in line with the existing works on different aspects of OI and a firm's performance (e.g. Cheng and Huizingh, 2014; Hung and Chou, 2013). In addition to the consensus on the benefits of openness, the specific examination of the effectiveness of the three types of OI enriches our understanding of the phenomenon.\n6. Conclusions, implications and limitations: 6.1 Conclusions\n",
"role": "user"
},
{
"content": "Purpose: The purpose of this paper is to examine the relationship between the use of open innovation (OI) practices and firm performance, and the role of organizational mechanisms (OMs) (formalization and decentralization) as moderator variables.\nDesign/methodology/approach: The authors build a theoretical framework to define a set of hypotheses that are then verified in an empirical study. These theoretical propositions are tested by using the data gathered from a survey addressed to 244 firms in Spanish low- and medium-technology industries.\nFindings: With regard to inbound practices, the practices oriented to cooperation with partners in an R&D context have a positive influence. The results show that outbound practices, whether by direct generation of revenues from licensing payments or, more indirectly, through the marketing and technical benefits that can stem from revealing, have a positive effect on firm performance. Coupled practices, which are related to participation in clusters and innovation networks, have the highest impact on firm performance. In the industrial context examined, decentralization exerts a positive effect that enhances the effect of outbound practices, whereas formalization reduces their positive effect.\nPractical implications: This study helps practitioners in low- and medium-technology firms to determine which OI practices are most beneficial to firm performance and how formalization and decentralization can influence the relationship between OI and firm performance.\nOriginality/value: This study helps determine the influence of OI practices in terms of inbound, outbound and coupled types through an analysis of low- and medium-technology firms. The OI literature is enriched by the treatment herein of the role of OMs, which includes an analysis of how formalization and decentralization moderate the influence of OI practices on firm performance.\n",
"role": "assistant"
}
] |
A survey on the advancement of QA (quality assurance) to TQM (total quality management) for construction contractors in Hong Kong
|
[
"Construction industry",
"Quality",
"Quality assurance",
"Total quality management",
"Hong Kong"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\nIntroduction: Quality on construction projects and project success can be regarded as the fulfilment of expectations of those participants involved (Ahmed et al., 2005). Quality management is a crucial element of the successful management of construction projects (Abdul-Rahman, 1997). Quality management systems provide the framework in which the participants' expectation can be achieved. Quality and quality systems are topics, which have received increasing international attention (Chan, 2007; Yates and Anifors, 1997). Diverse management factors including support of senior management, appropriate leadership style, cultivation of employee's enthusiasm and participation, open communication and feedback must be managed properly to achieve a good quality management system in construction (Ahmed et al., 2005). Broadly speaking, there are two kinds of quality management systems, quality assurance (QA) and total quality management (TQM). Usually, QA is implemented before TQM; the former usually serves as a stepping-stone for an organization to embark on the latter (Tang et al., 2005). The purpose of this paper is to report an investigation on the advancement of QA to TQM for construction contractors in Hong Kong.\nLiterature review: Quality assurance (QA)\nMethodology: Further to the literature review, a questionnaire survey was designed for a full-scale survey. The questions on QA were developed primarily from the clauses in ISO 9001 (1994) while those on TQM were developed from the management principles in BS7850 (1992). Construction elements were integrated into the questions with reference made to techniques and experience in \"Construction quality management\" written by Tang et al. (2005). 
Moreover, the questionnaire was also developed based on the knowledge obtained from the literature review and the experience of the authors in quality management. In advance of the full-scale survey, a pilot questionnaire was sent to three practitioners in November 2007 to validate it and to test the effectiveness and ease of understanding of the survey questions. The final set of the questionnaire consisted of three parts. Part 0 was intended for gathering the organization information of the company in which the respondent was working. Part 1 focused on the ISO 9000 requirements while Part 2 was set based on TQM principles in the context of construction. Parts 1 and 2 of the questionnaire are shown in the Appendix of this paper. This questionnaire was then sent to professionals working in contractors in December 2007. Replies to these questionnaires were received during January and February 2008. The survey attempted to ascertain how familiar the industry was with ISO 9000 quality management applications. In the same survey, an enquiry was also made on how the TQM principles and techniques are perceived. In the next two sections, discussions on the findings will be presented, and further approaches on how quality management in the Hong Kong construction industry can advance will be suggested.\nSummary of findings: Return rates and background of respondents\nDiscussion on findings: Quality awareness in the construction industry\nConclusions: Research literature tends to classify QA as a systematic and static approach and TQM as a dynamic, people-oriented approach. There is a trend to think that QA and TQM complement each other. ISO 9000 has been applied as a QA model and practised worldwide across all manufacturing and service industries including construction, which is different from others in its uniqueness and interfacing characteristics. 
The Hong Kong construction industry has been actively practising the ISO 9000 model for more than fifteen years, but the improvements in project quality are less than the industry anticipated. Public clients have taken the lead to drive for total quality excellence at the beginning of the millennium. The results of the questionnaire survey on the current application status generally indicated full knowledge of the QA principles and considerable knowledge of the TQM principles. A detailed analysis of the scores assigned by the respondents, however, indicated that contractor organizations were fully familiar with the QA application but were less familiar with the TQM principles. In the questionnaire survey, the three QA requirements that were comparatively less adopted by contractors were identified to be clients' satisfaction survey, application of statistical tools and provision of training. On the other hand, the three mostly applied QA requirements were found to be an organisation's commitment to quality, independent auditing and review of tender documents prior to submission. In the TQM section, the items assigned with the lowest three scores by contractors were the introduction of the TQM principles and tools, the adoption of the value engineering approach at project commencement and the regular review of clients' project priorities. The top three scores in the TQM section were found to be responding to clients' complaints, resolving these complaints and continual review of construction safety and work environment. The score range of these items reflects the construction quality culture in Hong Kong. The culture is in turn influenced by the society's commercial mindset, the government's regulations and requirements, and the contractors' own organisational cultures. Improved quality cannot be achieved in isolation; it requires the commitment of every one of the industry's participants to achieving excellence. 
Similar studies on the QA and TQM applications by other construction industry stakeholders such as clients and design consultants are warranted. Under the prevailing competitive business environment and the demanding quality excellence expected by customers, the issue of the way forward for quality management is pressing. A comparison of the prevailing quality system in Hong Kong with those more successful systems practised in the Asian region and in the international construction sector could be conducted. An attempt to explore the root causes of the current restrictive quality culture and behaviour of the participants in the Hong Kong context is also recommended. Such a study could also be expanded to include the consideration of frequent component fabrication in mainland China.\n",
"role": "user"
},
{
"content": "Purpose: - The purpose of this paper is to report an investigation on the advancement of QA to TQM for construction contractors in Hong Kong.\nDesign/methodology/approach: - The investigation is based on a questionnaire survey. The questionnaire was answered by 40 respondents working in construction contractors in Hong Kong including project managers, section managers, senior engineers and quality managers. The data obtained from the questionnaire survey were then analysed.\nFindings: - The results of the questionnaire survey are presented in the form of the levels of application of QA and TQM as practised by the contracting organizations in Hong Kong. The findings suggest that contractors are fully familiar with the application of QA but are less familiar with that of TQM. Both QA and TQM items that are mostly practised and seldom practised are discussed. In association with the findings, potential areas for exploratory studies in quality management systems and the quality culture of the Hong Kong construction industry are suggested.\nOriginality/value: - The paper explores new directions in the advancement of QA to TQM for construction contractors and offers help to practitioners in providing a direction for construction quality management improvement.\n",
"role": "assistant"
}
] |
Rooted in relationships: An analysis of dimensions of social capital enabling instructional coaching
|
[
"Social capital",
"Leadership",
"Collaboration",
"Coaching"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\n__NO_TITLE__: Standards, instructional materials, and high stakes testing affect schools and teaching in numerous ways (Anagnostopoulos and Rutledge, 2007; Booher-Jennings, 2005; Diamond, 2007). More specifically, Race to the Top's attention to teacher evaluation and Common Core's ambitious standards put instructional improvement at the center of reform efforts and placed pressure on the education system to raise educator quality and improve instruction. To meet these demands, states and districts turned to instructional coaching (Deussen et al., 2007; Neufeld and Roper, 2003), a capacity-building instrument for promoting individual and system-level instructional change (McDonnell and Elmore, 1987). Coaching systems and models are \"predicated on the notion that change efforts cannot be successful without building capacity for change\" (Mangin and Dunsmore, 2015, p. 180). Millions of dollars in federal funding, including Title II of the Elementary and Secondary Education Act and turnaround School Improvement Grants, are allocated toward coaching (Kutash and Nico, 2010). Instating this reform lever requires investments in financial and human capital to hire, train, and supervise coaches.\nObjectives: In this paper, we apply concepts of social capital to uncover organizational conditions associated with reform-oriented instructional coaching (ROIC). This paper responds to the following research questions:\nLiterature review: This study of organizational conditions associated with reform-oriented coaching is grounded in the literature on instructional coaching and social capital theory. Below, we briefly review major themes from the scholarship on instructional coaching. 
Then we define concepts from social capital theory, describe its application in the field of education, and explain its utility for understanding the enactment of coaching.\nMethods: Over the 2015-2016 school year, we conducted a qualitative case study of coaching in District D (Creswell, 1998)[1]. Located in a city in a northeastern state, this district served approximately 24,000 students in 45 K-12 schools; about 90 percent of its students were eligible for free/reduced lunch and over 40 percent were English learners. As District D was one of the state's lower-performing districts, it faced significant accountability mandates from the state department of education.\nFindings: In District D, administrators invested considerable resources toward coaching; approximately $8 million was budgeted annually for coaching. This provided funding for site-based coaches, district-level supervision of coaches, and PD sessions for these coaches. Yet, coaches were based in schools, and the enactment of coaching, as a policy instrument, was influenced by school-level conditions. To comprehend the enactment of coaching, it is necessary to unpack the structures, systems, and activities enabling ROIC in schools (Coburn and Russell, 2008; Frank et al., 2004). After summarizing QCA findings on sets of conditions associated with a school's adoption of ROIC, we depict the structural and relational dimensions of social capital guiding the implementation of ROIC across schools.\nConclusions: This paper explores organizational factors that steer the implementation of ROIC. By depicting schools' social structures and processes, we reveal how the structural and relational dimensions of social capital facilitate ROIC. The paper's findings acknowledge that, in a single urban district, there exists a spectrum of coaching models amongst schools. We shed light on principals' role in defining coaching and setting up structures supportive of collaboration amongst leaders, coaches, and teachers. 
In this manner, the paper's findings indicate that the principal plays an active role in creating conditions that enable the adoption of ROIC. Our findings also reveal that, by managing structures and routines for teams (e.g. leadership and grade level team meetings), principals can reinforce the web of teachers and coaches - with benefits for coaching and school improvement.\n",
"role": "user"
},
{
"content": "Purpose: The purpose of this paper is to draw on the concepts of social capital in order to reveal the organizational conditions, including structural and relational factors, associated with reform-oriented instructional coaching (ROIC) in an urban school district.\nDesign/methodology/approach: An interpretivist approach was used to analyze organizational conditions enabling ROIC. Interview, observation and document data collected focused on coaching, leadership, and school-level organizational conditions. Qualitative data analyses, including coding and memoing, were used to summarize key information and quotes across data sources; this was followed by qualitative comparative analysis (QCA) to identify combinations of factors associated with reform-oriented coaching.\nFindings: The findings identified particular structures, systems, and activities enabling ROIC at the school level, with social capital playing a role in facilitating or impeding implementation of such work. That is, relationships, routines, norms, and webs of interaction enabled coaching. Principals' prioritization of coaching as an improvement lever and their persuasive framing of coaching, coupled with principal-coach collaboration, fostered a positive culture for ROIC.\nPractical implications: This paper points to the vital role of collaboration amongst administrators, coaches, and teachers. Principals play a significant role in defining coaching, setting up structures, and creating conditions supportive of the implementation of ROIC. By managing structures and routines, principals can encourage coaching aligned with reform efforts to yield positive outcomes.\nOriginality/value: This research advances the field's understanding of organizational factors influencing the enactment of ROIC. It uses QCA to reveal the value of leadership in shaping structural and relational conditions in a school site.\n",
"role": "assistant"
}
] |
A new process for creating points systems for prioritising patients for elective health services
|
[
"Elective health services",
"Prioritisation",
"Points system",
"Multi‐criteria decision‐making",
"Conjoint analysis",
"Health services",
"Decision making",
"Patients",
"New Zealand"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\nIntroduction: In public health systems worldwide there is insufficient capacity to be able to treat all patients for elective health services immediately. Patients must therefore be prioritised for access to treatment. One approach is to use points systems - also known as \"scoring\" or \"point-count\" systems - to rank patients. For example, this is done in New Zealand (Hadorn and Holmes, 1997a, b; MacCormick et al., 2003; Seddon et al., 1999; Halliwell, 1998; Dennett et al., 1998), Canada's Western provinces (Noseworthy et al., 2003) and parts of the UK (Ryan et al., 2001; MacCormick et al., 2003). A points system consists of criteria for deciding patients' relative priorities for treatment, where each criterion is demarcated into two or more categories. Each category is worth a certain number of points that is intended to reflect both the relative importance (\"weight\") of the criterion and its degree of achievement. Each patient is \"scored\" on the criteria and their corresponding point values summed to get a \"total score\", by which patients are ranked (prioritised) relative to each other. An example of a points system for prioritising patients for coronary artery bypass graft (CABG) surgery appears in Figure 1. Beginning in 2004, a new process for creating points systems was developed and applied in New Zealand. The objective was to overcome the limitations of earlier methodologies (discussed next) and to create points systems that are valid and reproducible and based on a consensus of clinical judgements. The process was initially applied to CABG and then successively to other elective services. 
Since 2008 the process has also been used in the public health systems of Canada's Western provinces. This article describes this process, which the authors were closely involved in developing and applying: Alison Hendry and Ray Naden were the project manager and clinical advisor respectively, Paul Hansen and Franz Ombler created the software used in the process, and Ralph Stewart was the \"clinical champion\" for CABG. The process is explained in a step-by-step manner so that it is possible for readers to apply it themselves to create points systems for their own patient-prioritisation applications. Because it was developed in New Zealand, as background we begin by briefly discussing the use of points systems in that country, and then we explain the process.\nPoints systems in New Zealand: In 1993 waiting lists in New Zealand were criticised for being \"a diverse mix of patient cases - placed and kept on the list for a number of different reasons, and with no agreed criteria for admission to the list\" (Fraser et al., 1993, p. 18). To overcome these weaknesses, and with the aim of increasing the transparency, consistency and fairness of how patients are prioritised, points systems were adopted nationwide in 1998 (Hadorn and Holmes, 1997a). Also known as \"National Clinical Priority Assessment Criteria\" (CPAC), they were introduced for CABG, cataract surgery, hip and knee replacement, prostatectomy for benign enlargement, infertility treatments, cholecystectomy, and tympanostomy tubes with otitis media with effusion. Consistent with international best practice at the time (MacCormick et al., 2003), these early points systems were created using an iterative modified-Delphi consensus process involving clinical leaders (Hadorn and Holmes, 1997a, b). 
For each elective service concerned, the clinicians first agreed on the relevant decision criteria (and categories within each criterion), representing clinical factors associated with the severity of the disease, and then directly specified the point values. These point values were then \"fine tuned\" based on the results from regressing clinicians' ratings of patients' urgency for treatment (expressed on a visual analogue scale) against the patients' characteristics in terms of the criteria. Unfortunately, within a year of being adopted, concerns arose about the construct and outcome validity of the points systems for CABG (Seddon et al., 1999), cataract surgery (Halliwell, 1998), and cholecystectomy (Dennett et al., 1998). A 2004 review commissioned by the Ministry of Health of the CABG points system found substantial geographical variations in its use and in prioritisation processes nationally (Diana North, Fiona Doolan-Noble, CABG Prioritisation Project: Final Report, unpublished). This points system was criticised for including criteria that poorly reflected patients' risks of surgery and capacities to benefit, and for having point values that were considered to be largely arbitrary. Given that the above-mentioned points systems - and others, by implication - were widely judged to have been failures, it was decided that a new process for creating them was needed. In 2004 a project to develop a new process, initially to create a new CABG points system, was launched. The project was led by the Ministry of Health, initially in collaboration with the New Zealand region of the Cardiac Society of Australia and New Zealand (CSANZ). Advice was sought from the National Ethics Advisory Committee, the Medical Council, the Health & Disability Commission, the Human Rights Commission, and Maori (New Zealand's indigenous people) representatives. 
As the project progressed and the new process was refined, other points systems were successively created (and clinically endorsed) via collaborations with the relevant clinical professional organisations for hip and knee replacements, varicose veins surgery, cataract surgery, gynaecology, plastic surgery, otorhinolaryngology, and heart valve surgery. Other points systems are planned for the future.\nThe process for creating points systems: The process consists of seven steps:\n1. Rank patient case vignettes using individual clinical judgements and then by consensus.\n2. Draft the criteria and the categories within each criterion for prioritising patients.\n3. Pre-test the criteria and categories and refine them.\n4. Consult with patient groups and other clinicians.\n5. Determine the point values for the criteria and categories.\n6. Check the test-retest reliability and face validity of the points system.\n7. Revise the points system as new evidence emerges or clinical judgements change.\nFor each elective service concerned, the process is performed by a working group of clinical leaders in consultation with patient groups and other clinicians (Step 4). The process is supported by internet-based software known as \"1000Minds\" that enables the group to work together remotely, with teleconferencing and face-to-face meetings to assist group work when needed. Developed by two of the authors (Paul Hansen and Franz Ombler), the software is available for unfunded academic use for free from them or www.1000minds.com. Other applications of the software include health technology prioritisation (Golan et al., 2011), classifying individuals' risks of suffering from rheumatoid arthritis (Neogi et al., 2010) and measuring patient responses in clinical trials for chronic gout (Taylor et al., 2011). The steps are performed by participating clinicians (and, potentially, patients and other stakeholders), supported by the internet-based software, and explained below.\n1. 
Rank patient case vignettes using individual clinical judgements and then by consensus\nDiscussion: Our experience (beginning in 2004) with using the process to create the points systems referred to earlier reveals it is acceptable to clinicians and their professional organisations as well as to patient groups (consulted with at Step 4). New Zealand's National Ethics Advisory Committee commented that the points systems created using the process have \"potential ethical advantages over alternative approaches, because [they emphasise] explicit rather than implicit bases for prioritisation, and [they aim] to prioritise amongst patients on nationally consistent grounds.\" (National Ethics Advisory Committee, 2006, p. 4) The process was a finalist for the 2006 New Zealand Health Innovation Award (Day, 2006). The points systems for CABG (MoH, 2008a), hip and knee replacements, cataract surgery, and gynaecology have been endorsed by their professional organisations (e.g. New Zealand Orthopaedic Association, 2007) and are in use nationwide. The points systems for plastic surgery and otorhinolaryngology (the two most recently completed) have been endorsed and are in the process of being implemented (MoH, 2008b). With assistance from two of the authors (Alison Hendry and Ray Naden), the process has also been used in the public health systems of Canada's Western provinces since 2008 (De Coster and Noseworthy, 2009). A points system for rheumatology has been pilot-tested successfully (Fitzgerald et al., 2011), and others for geriatrics, nephrology, and gastroenterology are being tested for reliability and validity (Step 6) (Noseworthy et al., 2009). As well as clinical expertise, prioritisation involves ethical judgements. It is likely that clinicians, patients and other stakeholders have different perspectives - including with respect to their judgements about need and potential to benefit. 
For the points systems for cataract surgery and plastic surgery referred to earlier, patients worked with clinicians to draft the criteria (Step 2) and determine their point values (Step 5). An area for future research is further investigation into the appropriateness and feasibility of involving patients and other stakeholders more fully in the process described in this article.\nAcknowledgements: The New Zealand Ministry of Health funded the project described in the article. The authors thank the project's participants, especially the clinicians and their professional organisations, and the two anonymous referees for the journal. All authors have extensive experience and expertise in various aspects of prioritisation decision-making. Ray Naden and Alison Hendry led and managed the project described in the article; Ralph Stewart was the clinical champion for applying the process to CABG; Franz Ombler and Paul Hansen created the software to support the process explained in the article, and advised on multi-criteria decision-making methods. This article arose from the authors' experience working together over the last seven years to develop and implement the process described in the article.Potential conflict-of-interest disclosure: Paul Hansen and Franz Ombler own the software referred to, which was created to support the process explained herein. All other authors declare that they have no potential conflicts of interest.\n",
"role": "user"
},
{
"content": "Purpose: - This paper aims to describe a new process for creating points systems - i.e. decision criteria and their point values - for prioritising patients for access to elective health services.\nDesign/methodology/approach: - The process was developed in New Zealand from a project the authors were closely involved in, beginning in 2004, to create new points systems, initially for coronary artery bypass graft (CABG) surgery and then successively for other elective services. The objective was to overcome the limitations of earlier methodologies for creating points systems.\nFindings: - The process, supported by internet-based software, consists of seven steps performed by a working group of clinical leaders for the elective service concerned, in consultation with patient groups and other clinicians. The authors' experience reveals it is acceptable to clinicians and their professional organisations as well as to patient groups.\nOriginality/value: - The process creates points systems that are valid and reproducible and based on a consensus of clinical judgements. The process is explained in a step-by-step manner so that it is possible for readers to apply it themselves to create points systems for their own patient-prioritisation applications.\n",
"role": "assistant"
}
] |
Sustaining success in challenging contexts: leadership in English schools
|
[
"Education",
"Leadership",
"Schools",
"United Kingdom",
"Teachers"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\nIntroduction: The headteachers were selected on the bases that the Department for Education and Science improvement data for their schools for the last four years (1999-2002) showed a general upward trend and that in the most recent Office for Standards in Education (OFSTED)[1] reports on the schools their leadership was described as being \"excellent\" or \"outstanding\". Further filters were applied relating to length of experience, gender, phase and socio-economic status (SES) of schools, pupil mix and geographical location. The schools in which the headteachers worked consist of one nursery/infant school, five primary schools and four comprehensive secondary schools, all of which are in what can be described as challenging urban circumstances. All the schools were located in neighbourhoods with relatively high levels of social deprivation, with the percentage of pupils known to be eligible for free school meals and the percentage of pupils identified as having special educational needs being well above the national average. On entry, the majority of pupils were underachieving:When they start, they are at a disadvantage ... their language development and social skills are poor (Primary Teacher, Primary 10).All of the schools are publicly owned and maintained by local education authorities in England. Three serve communities which are predominantly Muslim. None of the others have a significant proportion of ethnic minority pupils. Of the headteachers, six are men (three primary, three secondary) and four are women (one nursery/infant, two primary and one secondary). All but four of them are over 50 years of age. All have been in post for at least 5 years and two have been in post for over 20 years. 
Only one had held a previous headteacher post. When the heads had been appointed, the schools were either \"sinking\" or at best \"struggling\" (Stoll and Fink, 1996) in terms of their academic attainments and social ethos. During their tenure improvements in both pupil attainments and social ethos had occurred and had been sustained. One primary school was located in a docklands area of 34 per cent male adult unemployment, with \"appalling\" housing and some children subject to child abuse and problems with social services; and a secondary school which had been \"losing kids in droves\" when the present head arrived in 1990, now had increased numbers of pupils and was in the top 5 per cent for pupil examination results when measured against schools in similar circumstances. All the heads had \"battled\" persistently to win over their communities and establish a \"can do\" ethos amongst staff and parents: I think the most significant element that a headteacher brings to any organisation is the imprint on the ethos of the organisation (Principal, Secondary 1). Some heads had witnessed and worked successfully with an almost total change of pupil population. For example, one city primary school which was located in a Jewish community now had 80 per cent of pupils who were of Bangladeshi origin, with very few bi-lingual. Another primary school with a predominantly white population was situated in a former mining community which itself had been subject to change over the years. As well as documented evidence of sustained success through national pupil test and examination results, the headteachers' own views and the views of principal stakeholders were sought. Thus, students, governors, parents, teaching and non-teaching staff were interviewed in order to triangulate data as a means of establishing the trustworthiness of the accounts. 
Questions focussed upon the personal and professional contexts of, and influences on the heads and their schools - challenges of pupils, communities and policies; the school ethos; accounts of success; relationships, interactions, pupils' results and reasons for the school's successes; and the heads' role in these.\nThe \"Standards\" agenda: The endless government pressure to improve has become part of the fabric. It's what I've come to expect. I think that the problem is if you become dominated by that to the exclusion of everything else, then it does become a pressure ... If you're successful the pressure's less than if you're under the cosh because your not successful (Head, Primary 4). The Government-imposed \"standards\" agenda in the UK, with its focus upon the introduction and implementation of national strategies for measuring pupil achievement and compliancy of schools to externally derived standards of performance as illustrated by a national competency framework for principals (DfES, 2004) and external independent school inspection, has had the effect of:\n* bringing into sharp relief the complex interrelated personal, academic and social growth purposes of schooling; and\n* highlighting the role of heads in ensuring that the government's targets for improving pupils' attainment in particular areas of the curriculum are met.\nIts apparent focus upon rational forms of management planning with an emphasis upon \"performativity\" through target setting, assessment and measurable achievement has been found by increasing numbers of headteachers and teachers to be limited as a means of achieving success because its vision of what is needed to achieve better examination results fails to address the need to provide an education which matters both for the individual and society, i.e. an education which serves moral as well as instrumental purposes and which promotes the democratic ideals of the society. 
For principals as leaders of leaders, the choice is to become part of what some have called a culture of compliancy (Ball, 2000), to take a more subversive role or to mediate by making judgements about the way in which external agendas may be implemented in order to ensure that they are placed within their broader improvement agendas. Previous research on successful school leadership in schools in England (Day et al., 2000) had revealed that existing theories of successful school principalship were unable to contain or adequately reflect empirically these complexities, and the dilemmas and tensions which characterised the working lives of headteachers in England. By seeking the perspectives of a range of participants (stakeholders) it found that dilemmas - choosing between courses of action which are \"to a greater or lesser extent mutually exclusive\" (Clark et al., 1999, p. 170); and tensions - defined as specific sets of pressures experienced by leaders in certain contexts which involved choices which were not necessarily mutually exclusive - were endemic to headteachers' lives. The research identified three dilemmas (development or dismissal of staff; power with or power over staff; sub-contracting or mediation of external change initiatives); and seven tensions (leadership versus management; development versus maintenance; internal versus external change; autocracy versus autonomy; personal time versus professional tasks; personal values versus institutional imperatives; leadership in small versus large schools). What seemed to characterise the lives of successful headteachers in that study - in addition to a strong sense of agency, value-based leadership and a range of generic qualities and skills - was their ability to manage these successfully. The research concluded that whilst socio-political, economic and professional contexts were important influences, successful heads were driven primarily by individual value systems. 
It seemed that moral purposes, emotional and intellectual commitment and ethical and social bonds were far more powerful levers of leadership than extrinsic agendas. The findings from the current research with successful headteachers in schools in challenging circumstances provide further evidence of the choices which successful headteachers seem to make. Three themes have emerged from the research which together indicate not only the intellectual, social and emotional complexities of successful leadership in schools in challenging circumstances, but also how these ten headteachers sustain their success in leading and in managing the dilemmas and tensions of the work lives of all in their school communities:\n1. Moral purpose and social justice. Within this theme a number of related sub-themes were identified: vision and resilience; articulating and upholding values and beliefs; a focus upon moral purpose; and fostering an inclusive community.\n2. Organisational expectation and learning. Within this theme, headteachers created expectations for high achievement of staff and students; built internal capital and capacity; and were \"lead learners\".\n3. Identity, trust and passionate commitment. Here, headteachers demonstrated the importance of defining and maintaining individual and collective identities, renewing trust and their passion for the work of educating.\nMoral purpose and social justice\nConclusion: Research across many countries shows that despite pressures from multiple policy implementation accountabilities, successful headteachers are those who place as much emphasis upon people and processes as they do upon product. What they do is not confined by the contexts in which they work. They do not comply, subvert, or overtly oppose. Rather they actively mediate and moderate within a set of core values and practices that transcend narrowly conceived improvement agendas. 
They remain constantly concerned with building and sustaining their schools as caring, values-led, trustful collaborative communities rather than as quasi businesses. Within their management of competing tensions and dilemmas, they remain vision-oriented and people-centred (Blase and Blase, 1999; Day et al., 2000; Hallinger and Heck, 1999; Moos, 1999; Ribbins, 1999); and they place a priority on building trust through establishing cultures and systems that promote \"bottom up\" and enable \"top down\" approaches to succeed within notions of \"inside out\" school improvements (Barth, 1990). Given the reform contexts in England and their impact upon the ways in which schools must now conduct their business, whether it be teaching and learning in the classroom or management and leadership, it would not be surprising if headteachers, like teachers, were tempted to act as sub-contractors of government agendas and to focus pre-eminently upon the school effectiveness and improvement mantras of effectiveness and efficiency to the exclusion of the broader pupil development agendas. It would be tempting also for the external observer to subscribe to the critics' view, so ably represented in England by Ball's (2000) work, that to succeed, leaders need to be compliant rather than reflective or critical. However, the research reported here provides empirical evidence that despite the pressures and consequent tensions, successful headteachers, like successful teachers, are resilient, and have found \"room to manoeuvre\" (Helsby, 1999). None of them believed themselves to be compliant. They were all concerned with values and achievement in leading in ways which built a sense of identity, community and achievement for all stakeholders. They also managed with integrity the emotions, tensions and dilemmas that are part of the everyday life of teaching and learning in reform-responsive schools of the twenty-first century.\n",
"role": "user"
},
{
"content": "Purpose: - This paper aims to report multiperspective research on ten successful, experienced headteachers working in a range of urban and suburban schools of different sizes (with different school populations and free school meals indices of between 20 and 62 per cent).\nDesign/methodology/approach: - A discussion combining narrative and analysis.\nFindings: - The research revealed that the headteachers sustained their success by the application of a combination of essential leadership values, qualities and skills and that these enabled them to manage a number of tensions and dilemmas associated with the management of change.\nOriginality/value: - Illustrates that successful headteachers are those who place as much emphasis on people and processes as they do upon product: all had raised the levels of measurable pupil attainments in their schools and all were highly regarded by their peers. A key characteristic among the heads was that all revealed a passion for education, for pupils and for the communities in which they worked that this was recognised and appreciated by them, that they had translated their passion into practice, and that pupils' achievements had increased over a sustained period of time.\n",
"role": "assistant"
}
] |
Something-for-something love: the motivations of young women in Uganda
|
[
"Women",
"Uganda",
"Sexual behaviour",
"HIV"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\nIntroduction: Something-for-something love (SFSL) is a phrase popularised and used in Uganda, including by young women, and is a phenomenon that places them at a heightened health risk. SFSL, also known as transactional sex, is defined as engaging in sex in exchange for favours, money, gifts and goods (Calves et al., 1999; Moore and Biddlecom, 2006; Konde-lule et al., 1997). SFSL has increasingly drawn the attention of reproductive health researchers due to its connection with negative health outcomes for young women including early exposure to HIV. In Uganda, young women aged 15-19 has a HIV prevalence of around 3 per cent and approximately 75 per cent of youth infected with HIV/AIDS are female (Ministry of Health Uganda, MACRO, ORC, 2006). Evidence shows that SFSL is associated with sexual leverage and a lower rate of condom use (Luke and Kurz, 2002; Luke, 2003).Literature has suggested that the expectation of gifts in exchange for sex has become the norm among Ugandan teenage girls (Luke, 2003; Moore and Biddlecom, 2006). One study in rural Uganda revealed that 75 per cent of female participants stated that expectation of gifts was the main reason they had sex at their last intercourse (Moore and Biddlecom, 2006). The motivations for young women participating in these relationships are numerous and interlinked. Financial support is the main and largest motivator much publicised in research as young women rely on the income particularly for school fees (Moore and Biddlecom, 2006; Luke, 2003, Weissman et al., 2003). It is clear that in rural sub-Saharan Africa young women are at the lower end of the socio-economic spectrum and it is well documented that there are more women living in poverty than men (Weissman et al., 2003; Hallman, 2004; Topouzis, 1990). Coupled with gendered poverty is the prevalence of poverty among youths. 
Age often limits their ability to seek formal employment, to own land and receive financial credit, yet the expectations and need for young people to contribute to family livelihoods make them more vulnerable to poverty and exploitation (Topouzis, 1990; United Nations, 2004). The World Youth Report estimated that 19.8 per cent of all young people in sub-Saharan Africa today live on less than $1 per day (United Nations, 2004). Therefore, youth poverty coupled with gendered poverty suggests the significantly disadvantaged position of young women in sub-Saharan Africa. This female poverty perpetuates and contributes to SFSL. Other documented motivators in literature include family and peer pressure (Moore and Biddlecom, 2006; Weissman et al., 2003). Economic exchange, coupled with peer, financial and family pressure, creates a power imbalance within a relationship and this directly affects young women's ability to negotiate sex. Gift giving of any kind within a relationship is strongly associated with sexual leverage and a low rate of condom use (Luke and Kurz, 2002; Luke, 2003). Condom use is one crucial aspect of HIV prevention in which young women have limited, or no, control. Literature indicates that most young women are obliged to have sex with men who offer them gifts and are less likely to suggest condom use with these partners (Luke and Kurz, 2002; Hallman, 2004). The more gifts given, the weaker the young woman feels in terms of being able to abstain from sexual interactions. Young women begin to feel that the men want something in return for their gifts and hence feel obliged to sleep with them (Ministry of Health Uganda, MACRO, ORC, 2006). In this study I aim to provide a greater understanding of young women's sexual relationships in order to gain insight into the risk and protective factors that render them vulnerable or provide resiliency in a risky environment. 
I am focusing on SFSL as this concept is receiving increasing attention, yet the perspectives of young women themselves are less frequently highlighted.\nResearch methods: The study participants consisted of six in-school young women aged 13-18, selected at random from the Budondo region. Straight Talk Foundation (STF) Uganda, a health communication NGO that provides behavioural change communication (BCC) programmes for young people, assisted in locating the schools in the Busoga region. The participants and their parents were contacted and consent obtained. Ethical approval was granted by the University of Leeds in conjunction with Makerere University in Uganda. I employed participatory methodologies consisting of illustrated drawings, life histories and writing stories (Stemler, 2001). These activities served as an icebreaker while also assisting in collecting background data. Next, participatory video (PV) was used, which is a highly effective tool for engaging and mobilising marginalised people, such as this small group of teenagers in rural Uganda (Waite and Conn, 2010; Lunch and Lunch, 2006). Participants were taught and encouraged to use a video camera and to video themselves acting out dramas and conducting interviews. Videoed material was then played back to the participants and discussions were initiated on themes generated from the video. Creating videos is easy and accessible, and is an innovative way of bringing people together to explore issues, voice concerns or simply to be creative and tell stories. To supplement my findings, I also conducted semi-structured interviews (SSIs) within this group of women. I chose SSIs rather than focus-group discussions as they facilitate in-depth discussion about a sensitive topic. SSIs were focused enough to collect important details about attitudes and experiences but also flexible enough to allow unforeseen and arising issues to be incorporated. 
In addition, I conducted key informant interviews with two subject experts. Transcripts, stories and life histories were analysed using an inductive thematic approach (Stemler, 2001). General issues of interest had been determined prior to the interviews; however, the specific nature of the codes was not predetermined (Ezzy, 2002; Neuendorf, 2002). Analysis involved summarising and manual open coding of transcripts sentence by sentence. Comparison of these codes identified recurring concepts from which themes emerged.\nFindings: Financial gain\nDiscussion: The literature in this field appears to contain two broad portrayals of young women engaging in SFSL. The first projects young women as vulnerable, passive victims of the socio-economic environment that gives rise to poverty, decreased employment opportunities and gender inequalities, which in turn is argued to force them to engage in risky involuntary sexual interactions in return for basic survival. The second portrays young women as more calculating, active social agents who carefully pick their interactions to maximise social gain. They often have multiple partners and view their sexuality as a valuable source of income (Moore and Biddlecom, 2006; Luke and Kurz, 2002; Luke, 2003; Weissman et al., 2003). However, this research indicates that many young women fall between these polarised portrayals. Another issue is that the literature depicts SFSL as \"bad\" and purely functional; the main motivator being the need for financial security or gain. However, an important finding of this study is that, in a situation where the life worlds of young women are more complex than is often depicted, SFSL is also more complicated than this negative position would imply. There are various motivators for young women to engage in SFSL, ranging from voluntary to involuntary. 
SFSL has the potential to be coercive; however, it is important not simply to label it as coercive, as it may be regarded as a normal aspect of a romantic relationship. A study conducted by Save the Children (Weissman et al., 2003) (see Figure 1) represents a continuum of motivations and attempts to frame the complexity found, providing a more realistic portrayal of SFSL within young women's life worlds. According to the findings of this research, drivers of SFSL relationships fall along a continuum ranging from emotional security and material gain through to basic needs. This continuum suggests that not all young women must be considered as vulnerable passive victims; some are empowered and choose to form these relationships for emotional security and sex as part of a loving relationship. At one extreme of the continuum are those who are forced into sex out of a need for survival. The young women in my study described this continuum at various points; the motivations they described ranged from voluntary, as those in a relationship for enjoyment, to involuntary, as those who engage in SFSL to obtain school fees and who may fall victim to sexual violence. It is important that appropriate interventions are implemented in response to the various scenarios along the continuum to protect young women from HIV/AIDS. For example, if a girl chooses to be in such a relationship to find love, an appropriate response is to help make that relationship safe and the practices protective. However, another girl might be forced into SFSL for basic survival; therefore, education fee provision and/or income-generation opportunities are a more appropriate response. Another important element in SFSL uncovered in this research was the role that parents played in shaping or terminating these relationships. Literature tends to overlook the vital role that parents play in the personal lives and choices of their children, especially those concerning sexual health. 
Participants in this study blamed parents for failing their children; the belief that their parents did not want to, or could not, provide care for them contributed to their seeking SFSL. Young women need guidance from their parents and relatives to enable them to make informed choices about their sexual and reproductive health. In Busoga only 35.2 per cent of children felt able to speak to their parents on matters of sexual and reproductive health (Straight Talk Foundation, 2005). In many areas in Uganda, parents have been blamed for forcing children to drop out of school and forcing marriages, which contributes to poor adolescent sexual health (Straight Talk Foundation, 2005). Current research reveals that family connectedness among young women serves as a \"super protector\" that may buffer young people from the risks of early sexual involvement (Lenzin et al., 2004). As this research reveals, young women's parental or guardian guidance strongly influences their decisions around SFSL. Parents/guardians may not realise that they implicitly encourage transactional sexual behaviour by pressuring their children to support themselves and not adequately providing for them. Conversely, a parent who fails to take an active part in their child's relationships may fail to prevent the child's risky sexual behaviour and inadvertently encourage it. However, these dynamics must be placed in the context of the overall economic constraints that shape parents' options. The continuum mentioned previously reflects the multiple motivators influencing young women to participate in SFSL. All motivators need to be taken into consideration when devising interventions to improve sexual health. As young women are less likely to negotiate the use of condoms when there is a transaction of money or gifts involved in relationships, effective programmes should also target men. 
Communication campaigns should acknowledge the existence of SFSL and continue to publicise the dangers of unprotected sex. In addition, programmes are required to assist in building the self-esteem of young women and attempting to reduce female poverty, for example, by creating more employment opportunities for young women so they are not as reliant on an income from SFSL.\n",
"role": "user"
},
{
"content": "Purpose: - This paper aims to explore the motivations of young women in Busoga, Uganda, engaging in \"Something-for-something\" love (SFSL) relationships. Something-for-something love is defined as engaging in sex in exchange for money, favours, gifts and goods. This paper examines whether these relationships affect young women's ability to negotiate safe sex.\nDesign/methodology/approach: - Qualitative methods were used with a group of six young women including participatory video, drawing, stories and life histories. In addition, semi-structured interviews were conducted with each participant and two key informants. Data were analysed using an inductive thematic approach.\nFindings: - Peer pressure, financial gain, school fees, basic needs and family pressure were key motivations for SFSL. Power dynamics embedded in Ugandan local culture were found to play a significant role in SFSL and family pressure is suggested to assume a greater role than previously perceived. Transaction was considered to be part of what was expected within a \"natural romantic relationship\". Condom use was less likely in relationships where a great amount of gifts were exchanged, as men were more likely to negotiate sex on their terms, and this often led to unprotected sex.\nOriginality/value: - The study contributes much-needed insight into motivations behind young women participating in SFSL. Young women involved in such relationships are portrayed along a spectrum ranging from vulnerable to empowered. The study emphasises the role parents and other kin play in influencing their children's sexual and reproductive health choices. The study highlights to policy makers that interventions aimed at improving parent-child communication are needed; as well as a more informed approach to HIV prevention, taking onboard the complexities of SFSL.\n",
"role": "assistant"
}
] |
Consumers' willingness to pay for light, organic and PDO cheese: An experimental auction approach
|
[
"Spain",
"Food labels",
"Cheese",
"Auctions",
"Preferences"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\n1. Introduction: Consumers are increasingly aware when shopping for food products, since they usually point out that they need more information about products' origin, their method of production or whether they can prevent certain health problems before they arise (de-Magistris and Gracia, 2014; de-Magistris et al., 2015b; Gracia and de-Magistris, 2016; Grunert, 2006). Health (e.g. nutritional information), origin (e.g. PDO and local) and process methods (e.g. organic) are credence attributes that are not revealed to consumers before they have bought and consumed the product, unless additional information is provided (Caswell et al., 2002). Therefore, the European Union (EU) has dedicated most of its policy to assuring that European consumers have access to complete information on the content, origin and composition of products with different regulations on food carrying: organic logos; quality labels (such as PDO and PGI); and nutritional claims.\n2. Experimental design and procedure: The home-grown auction was undertaken in a Spanish region during May and June 2012. Cheese products were chosen because their production has important economic and social effects, supporting the local economy in terms of jobs and income. Spain produces more than 150 different varieties of cheese and 32 of them have received PDO certification mainly due to the diverse topography and climate and the large variety of indigenous breeds (Mercasa, 2014). Spain has experienced a dramatic increase in cheese production during recent years, generating a turnover of 2,700 million euros (Mercasa, 2014). With regard to the Spanish demand, the annual per capita cheese consumption in 2014 was 9.3 kg with an associated expenditure of 60 euros per year, which represents 30 per cent of the total per capita expenditure on dairy products (Mercasa, 2014).\n3. 
Results and discussion: In total, 145 participants were selected randomly using a stratified sampling procedure by age, gender and education level. However, 16 participants who stated that they were not hungry before the experiment were excluded from the experimental analysis.\n4. Concluding remarks: Nowadays, individuals are concerned about the foods that they buy and consume; however, they often lack information about their origin, their method of production or whether they can prevent certain health problems before they arise. Hence, food products that are organically produced and have the regional origin indicator (PDO) and/or a reduced fat claim could be an interesting choice for European consumers.\n",
"role": "user"
},
{
"content": "Purpose: The purpose of this paper is to assess consumers' willingness to pay (WTP) for three different food claims on semi-cured, pasteurized sheep milk cheese. In particular, the authors used a health-related claim (the nutritional claim indicating a reduced fat content: \"light\"), a regional claim (\"designation of origin - PDO\") and an organic claim (the European organic logo). Moreover, the authors investigated whether consumers' personal characteristics could influence their WTP for those types of cheese.\nDesign/methodology/approach: A home-grown experimental auction was applied in Spain during Spring 2012. The authors opted to use the nth random price with repeated rounds and without price feedback.\nFindings: The results show that consumers were willing to pay more for PDO cheese, followed by organic and light cheese. Moreover, respondents who were female, older and with a university-level education showed some environmental concerns, influencing their WTP for different cheeses.\nOriginality/value: Empirical evidence on consumers' preferences for PDO, organic and nutritional claims, evaluated jointly, is lacking in Spain. Moreover, the home-grown auction has several merits in terms of real market simulation and consumer preference application.\n",
"role": "assistant"
}
] |
Consumers' intentions of buying own-label premium food products
|
[
"Product management",
"Brand names",
"Economic performance",
"Olive oil",
"Greece"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\n__NO_TITLE__: An executive summary for managers and executive readers can be found at the end of this article.\n1. Introduction: The brand name of any product has always been considered to be of great importance in determining perceptions of brand quality and attitudes towards the product (Srinivasan and Till, 2002). Branding has enabled producers to increase consumer awareness of the product, build consumer loyalty, and generate added value for products and companies (Prendergast and Marr, 1997a). However, many questions have been raised for both consumer brand owners and retailers by the success of own-label products (Yelkur, 2000).During the last decades, own labels have made a dynamic appearance in many product categories such as grocery, cosmetics and toiletries, household chemicals, paper products and clothing (Veloutsou et al., 2004). Furthermore, consumer acceptance of retail brands has grown such that many consumers believe they represent good value for money and trust retail brands as much as manufacturer brands (Laaksonen and Reynolds, 1995 - in Broadbridge and Morgan, 2001).While branded products use a manufacturer's name, sign or symbol to differ from other products, own-label brands are created and owned by a reseller (Kotler, 1991 - in Prendergast and Marr, 1997b). Own label products are \"products produced by, or on behalf of, retailers and sold under the retailers' own name or trade mark through their own outlets\" (Baltas, 1997).The most important advantage of own-labels is related to their low price, which is lower compared to the price of similar, competitive, branded products. 
This significant price differential between these products and their branded equivalent is mainly attributed to cost savings created by reduced packaging and promotional expenses (Prendergast and Marr, 1997b).Consumers' buying decision process for the purchase of this type of product is complex (Livesey and Lennon, 1978). The success of own label products is varied, as in some markets, such as the UK, they have taken a very large share of total grocery sales, but in other markets (such as the USA) this share remains relatively low compared to branded products (Yelkur, 2000).Moreover, consumers usually have different perceptions for different products, as each product category has certain characteristics, which in turn may have a significant effect on buying behaviour variables (Veloutsou et al., 2004). Therefore, each own-label product needs to be examined on its own merit (Prendergast and Marr, 1997a).Following this, even though there is a number of published studies on own labels, no specific research on the purchase of own-label olive oil has been carried out, although olive oil is one of the most important food product categories, where new market opportunities have opened, because of changes in lifestyles and cooking habits towards healthy eating, environmental concerns, exotic cuisine, and convenience food (Martinez et al., 2002).Apart from being trendy, olive oil is regarded as a product with particular attributes rather than ordinary cooking oil, and shares a premium perception, in that olive oil is not a price sensitive product (Martinez et al., 2002). Therefore, it would be interesting to examine consumers' buying behaviour towards an own-label low-priced product such as olive oil which at the same time enjoys a traditionally premium image. 
This realization points to the opportunity to further explore and provide insight into the relationships between particular variables that impact on attitudes towards own-labels and the purchase of own-label olive oil.Taking all these into account, this paper focuses on the case of own-label olive oil as a premium product category and aims at identifying the factors that impact on consumers' intention of purchasing this type of product, by synthesizing the research propositions and results from previous research studies and developing a conceptual model to be tested based on specific research hypotheses.Along these lines, the main research aim guiding the investigation was to explore the variables that affect customers' intention of buying own-label olive oil and to examine how these variables interrelate.The objectives of this research were:* To identify the variables that influence consumers' intention of buying own-label products, based on relevant previous studies.* To develop a research model for own-label olive oil purchase intention, which includes these variables.* To test this model via quantitative methods on a consumers' sample in Greece, to predict customers' purchase intention for own label olive oil.Greece was chosen as the research context of this study due to the vital role that olive oil has in the agricultural sector of the country and the high consumption rate of the product. Olive oil accounts for more than 10 per cent of the gross Greek agricultural product, and almost 5 per cent of total household spending on food (Lazaridis, 2000).To meet the objectives of this study, in the first part, previous studies relating to consumers' buying behaviour for both olive oil and own-label products are reviewed. In the second part, the research model and the methodology of the research are described and the results from a survey questionnaire, administered to Greek consumers, are presented and analysed. 
Finally, the implications of the study are discussed.\n2. Literature review: The rapid development of own label products offered by supermarket chains is changing the \"rules of the game\" in the food business industry. Many researchers have examined the development of these brands and consumers' behaviour for own-labels, mainly in relative mature markets where their use is widely spread (Veloutsou et al., 2004; Prendergast and Marr, 1997b). Following this, several factors have been identified to impact on purchase of own-labels and are presented in the next paragraphs.2.1. Factors that affect the purchase of own-label products\n3. Methodology: 3.1. The research model\n4. Data analysis: At the first stage of data analysis, the reliability of the scales was tested by computing Cronbach's alpha coefficient, average variance extracted (AVE), and composite reliability (CR). All scales of the latent variables were found to be reliable. For perceived benefits (PB), Cronbach's alpha was 0.763, AVE was 0.58, and CR was 0.70. For trust (TR), Cronbach's alpha was 0.8532, AVE was 0.70, and CR was 0.78. For brand loyalty (BL), Cronbach's alpha was 0.7001, AVE was 0.67, and CR was 0.69. For perceived economic situation (PES), Cronbach's alpha was 0.8431, AVE was 0.68, and CR was 0.81.Then, structural equation modeling was used to test the research model. Since one of the study's objectives was to test how several variables simultaneously affect consumers' purchase intention of own label olive oil, the use of structural equation modelling was necessitated. 
This method combines path models and confirmatory factor analysis, as it includes both observed variables and latent (unobserved) variables, whether independent or dependent (Schumacker and Lomax, 2004), and its main advantage is the ability to estimate and test relations between variables simultaneously (Hoyle, 1995).Using the technique of model development, the relationships that were suggested by the program modification indices were added to the initial model. As such, the final model was generated and is depicted in Figure 1.The research model was specified in a structural model and tested using the Amos 4.0 software. According to the results, the chi-square (χ2) statistics did not show a good fit of the model. This is however expected, according to Schumacker and Lomax (2004), who claim that as the sample size increases over 200, the χ2 statistic tends to indicate a significant probability level (p=0.000). Since all the important indicators of the model fit were above the accepted values, this model was considered to be an acceptable one.According to the final model, the intention of purchasing own-label olive oil is positively influenced by consumer attitudes towards this product and trust in own-labels, while it is negatively affected by brand loyalty and consumers' level of income. Consumer attitudes are positively influenced by the related perceived benefits and trust in own-label products, but receive a negative influence from income level and brand loyalty. Trust in own-label olive oil has a direct positive effect on the benefits consumers perceive they get from own-label olive oil but is negatively influenced by consumers' loyalty in branded olive oil.The results of the model-fit analysis are presented in Tables I and II and confirm that the final model is considered to be an acceptable one.Moreover, all the hypothesized relationships depicted in the model are found to be statistically significant, according to the related path coefficients. 
Following this, it seems that consumers' trust and positive attitudes towards own-label olive oil are the most important factors driving consumers' purchase intention in the suggested research model. Trust also has an indirect positive impact on purchase intention via its effect on consumer attitudes. Furthermore, the benefits that consumers perceive they get from own-label olive oil are influenced by the perceived economic situation and have an indirect but significant positive influence on their intention to purchase own-label olive oil.\n5. Conclusions and implications: The overall conclusion of this study is that although olive oil is traditionally considered to be a premium product, the purchase intention of own-label olive oil is influenced by the same variables that have been found to impact consumers' attitudes towards, and purchase intention for, other own-label products. This finding is of great importance as it opens new opportunities for retailers, who, by taking advantage of their economies of scale and the marketing strategies they already implement for other own-labels, can easily enter the market of own-label olive oil.Another important practical implication of the study refers to the role of consumer attitudes towards olive oil. Retailers should take into account that consumers would be willing to purchase own-label olive oil if they were positive towards it. Therefore, managers of retail chains should try to influence consumers mainly by providing them with detailed information about the benefits that they get from own-label olive oil, in terms of price, packaging and flavour, so that they form positive attitudes towards the product.Moreover, since trust in own-labels is vital in determining consumers' purchase intention, retail managers should try to instil in their clients confidence in their store brands. 
This means that retailers have to preserve a level of quality for all the products they sell under their own name, emphasizing the selection and quality control processes.In addition to this, this study showed that consumers' trust in own-label products influences consumers' perceptions of the benefits they enjoy from own-label olive oil. More specifically, the higher the level of trust consumers have in own-labels, the more benefits consumers perceive they get from the specific product. This finding is quite interesting as it points to the central role of trust in own-labels and strengthens the validity of the previous research conclusion.Furthermore, this study showed a direct effect of perceived economic situation on perceived benefits. The more pessimistic consumers are about the total economic situation, the more they seem to appreciate the benefits of own-label olive oil. This logical finding suggests that retailers can be more efficient in their efforts to sell their brands whenever the economic conditions are bad.Next to the practical implications, this study also contributes to the academic body of knowledge, by examining consumers' perceptions about a premium own-label product. No previous study on this subject could be found. Following this, the contribution of this study is versatile. Researchers introduced a new model of purchase intention, combined several variables, and provided evidence for the model's goodness of fit. Research findings of previous studies on own-labels and olive oil were confirmed and a new set of factors that influence the purchase of a premium own-label product was generated to be used in future studies.\n6. Limitations and further research: However, as with any other field research, this study has certain limitations. First of all, the field research was carried out only in the area of Athens. 
The results could have been different if the sample was drawn from other areas of Greece, as outside Athens consumers may have different attitudes towards own-label products. In the same frame, the results may vary when the research is carried out in another country, where olive oil consumption is relatively low. Furthermore, the research findings are only valid for the specific model structure, as structural equation modeling provides results for a particular model specification.Further research could be carried out in order to test the specific model in other premium own-labels, such as organic products. Moreover, additional variables could enrich the model and lead to better predictions of consumers' purchase intention.\n",
"role": "user"
},
{
"content": "Purpose: - Each own-label product enjoys different levels of customer acceptance and perceived risk and requires different attention with respect to the different elements of the marketing mix. This paper explores the factors affecting consumers' intentions to buy an own-label premium food product. More specifically, this study focuses on the case of own-label olive oil.\nDesign/methodology/approach: - The research involved a review of the available literature on factors affecting consumer attitudes towards own-label products. For the collection of data, a survey was designed and conducted in the greater area of Athens, Greece. The research questionnaire was administered by means of personal interviews to 799 consumers.\nFindings: - Data analysis results, using structural equation modelling, showed that consumers' purchase intention is directly affected by consumers' attitudes towards own-label olive oil, which in turn are influenced by consumers' perceived benefits, economic situation, brand loyalty and trust. Moreover, the level of income has a direct negative impact on both consumer attitudes and purchase intention.\nResearch limitations/implications: - Limitations relate to the use of non-probability sample and the restricted geographical area of the field research.\nOriginality/value: - Considering that own-label products' quality is questionable, this study contributes to the academic body of knowledge, by examining consumers' perceptions about a premium own-label food product. By testing existing knowledge in a new context, the paper makes incremental contribution to the knowledge on own-label products, and provides insights for practitioners.\n",
"role": "assistant"
}
] |
The moderating effect of operator type: the impact of information technology (IT) expenditures on hotels' operating performance
|
[
"Operating performance",
"Franchise",
"Independently owned hotel",
"IT expenditures",
"Management companies",
"Operator type"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\n1. Introduction: In the 21st century, the emergence of industry 4.0 featuring internet of things (IoT), artificial intelligence (AI), robotics and big data analytics has revealed its potential for transforming modern and future industries (Yan et al., 2017). The advances of information technology (IT) in the past century widely penetrated the hospitality industry (Buhalis and Law, 2008; Navio-Marco et al., 2018). New IT tools and systems have been increasingly adopted to improve and refine customer services and to enhance hotels' management processes (Ko et al., 2016). AI-based hotels have emerged in countries such as Japan (Henn na Hotel) and China (Flyzoo Hotel), where customers can check-in via mobile apps, interact with a smart speaker or robotic front desk and actively use a smart app (Ivanov and Webster, 2017; Ukpabi et al., 2018). Research shows that investment in IT has become a critical factor in enhancing hotels' performance (Wu and Lu, 2012) and can influence hotels' competitive advantage (Bilgihan et al., 2011) and overall performance (Hua et al., 2015; Melian-Gonzalez and Bulchand-Gidumal, 2016).\n2. Literature review: 2.1 Overview\n3. Methodology: 3.1 Data\n4. Results: Table 2 reports the basic statistics about the attributes of the hotels in this study. Hotel total revenue spans a wide range between $536,201 and $435,000,000, with an average of $10,200,000. Hotels' expenses on IT total expense reach the top of $42,000,000 a year with an average of $188,168. Hotels also exhibit a wide range in size from 41 rooms to 2,860 rooms. Hotels spend an average of $115,476 (with a maximum of $3,618,890) every year on loyalty program expenses. Accounting adjustments led to several negative numbers in the expense items, which do not qualitatively change the results from this study (sensitivity tests are carried out by deleting the negative numbers; results are omitted to save space).\n5. 
Discussions and implications: This study enhances the knowledge on how investments in IT impact hotels' performance by investigating the moderating role of hotel operator type while controlling for a set of exogenous variables. Our findings paint a fuller picture of the complex decision-making process regarding investment in IT in the hospitality industry. Systematic control of contextual variables and the series of sensitivity tests also ensure the robustness of the study results.\n6. Limitations and future research: Although the results from this study provide empirical evidence in support of the moderating role of operator type on the relationship between IT expenditures and hotel operating performance, this study does not directly map out a set of industrial practices on investment in IT. Therefore, it would be valuable to initiate a qualitative study that addresses the best practices and specific reasons that could contribute to the differential effects of operator type. Because of the lack of data on operational size, sophistication and complexity associated with operating type, this study does not include these variables in our models. Complementary surveys and interviews with key personnel could provide new insight on this matter. Furthermore, caution needs to be exercised when interpreting the generality of the results. Although the sample is large and of panel data, it is not from a completely random selection of the population. In addition, the data are restricted to the USA only. For future studies, collecting data from other countries could add to the robustness of this study. For example, Smith Travel Research can be another excellent company from which to attain a data set with similar variables to explore this studied topic. Lastly, more detailed operator types can be explored further when the data become available.\n",
"role": "user"
},
{
"content": "Purpose: This study aims to examine how operator type moderates the relationship between hotel information technology (IT) expenditures and operating performance.\nDesign/methodology/approach: By adapting and extending O'Neill et al.'s (2008) and Hua et al.'s (2015) research, this study constructed an empirical model and tested proposed hypotheses, with Newey and West (1994) errors computed to accommodate potential heteroscedasticity and autocorrelation issues.\nFindings: Operator type moderates the impact of hotel IT expenditures on operating performance. In particular, it appears that the operator type of franchising exerts a stronger moderating effect compared with other operator types explored.\nPractical implications: This study, as the first of its kind, shows that the choice of operator type shapes how a hotel can effectively use IT expenditures to improve operating performance. This finding can be beneficial for hotel owners when making operator type decisions. In addition, operator type moderates the direct impact of IT expenditures on revenues and gross operating income. This study's results show that franchised hotels seem to use IT expenditures more effectively compared with independently owned hotels.\nOriginality/value: This study contributes both theoretically and practically to understand how operator type moderates the relationship between IT expenditures and hotel performance. The research outcome provides a more holistic view that governs the relationships between IT expenditures, operator type and operating performance.\n",
"role": "assistant"
}
] |
Development of a user-friendly regression model to evaluate carbon emissions of office buildings design in the subtropics
|
[
"Office building",
"Building design",
"Carbon emission"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\nNomenclature: Abbreviations\n1. Introduction: Buildings account for approximately 40 per cent of total energy use and carbon emissions in the world (Bloom and Wheelock, 2011). A research conducted by USA National Research Council (2011) indicated that greenhouse gas emissions from buildings are the key factors to the rise in the surface temperature of earth. Wood and Agogino (2005) pointed out that the final environmental impacts of buildings are highly related to the early phases of building design. Change of design parameters in the early stage of a project involves very little cost but can reap huge benefits. Parmee (2005) stated that designers make the building design decision-based on intuition and past experience. Therefore, it is important for design teams to estimate building carbon emissions before the stage of the detail design and make necessary changes on building design when the level of carbon emission for the proposed building is found to be far above the baseline level (Basbagill et al., 2017). Building energy simulation is one of the common methods to investigate the quantitative relationship between carbon emissions and different building designs (Nguyen et al., 2014). However, because of significant resources, time and technical expertise, this method is not commonly used at the early design stage (Hopfe et al., 2005). In addition, a high degree of technical specification to characterise buildings, such as floor area, building materials and building system configurations, is required to achieve more accurate simulation results. De Souza (2012) discussed the fundamental differences in knowledge of building design and building physics throughout the whole building design. As such, these are reasons hindering the real application of building energy simulation programmes at the early design stage in practice. 
Therefore, there is a need to develop a simplified assessment model for evaluating carbon emissions for exploring design options at the early building design stage.\n2. Review on empirical carbon audit: A number of guidelines for empirical carbon audit in different countries and regions have been developed based on the International Standards ISO 14040 series and the Greenhouses Gas (GHG) Protocol (Lai et al., 2012). These guidelines aim to provide building owners/tenants/organisations with guidance to calculate the emissions of six GHGs (CO2, CH4, N2O, HFCs, PFCs and SF6). In general, GHG emissions in buildings are often categorised into three scopes (Lai, 2015). To ascertain the GHG emissions, the data and information required include:\n3. Material and methods: Figure 1 outlines the procedures and methods for developing a multiple linear regression to evaluate carbon emissions for office building design under the subtropics. The details of the procedures and methods will be discussed as follows.\n4. Results: 4.1 Regression models and accuracy\n5. Discussion: 5.1 Parametric study\n6. Conclusions: The work described in this paper presents the development of a regression model to estimate carbon emissions for office building design at a conceptual stage under the subtropics. The present study considers five parameters of building design, including building configuration, building envelope, design space conditions, building system configuration and occupant behaviour. A regression model was then developed using simulation results to estimate the carbon emissions of an office building at the operational stage. The SRCs were adopted to determine the impact on carbon emission with the respective design parameters. Such information provides valuable carbon-related insight at the early design stage. 
The results indicate that annual mean occupancy rate, equipment load, occupant density, lighting load and chiller COP are the top five influential design parameters affecting building carbon emissions under the subtropical climate.\n",
"role": "user"
},
{
"content": "Purpose: Design teams have difficulties in assessing building carbon emissions at an early stage, as most building energy simulation tools require a detailed input of building design for estimation. The purpose of this paper is to develop a user-friendly regression model to estimate carbon emissions of the preliminary design of office buildings in the subtropics by way of example. Five sets of building design parameters, including building configuration, building envelope, design space conditions, building system configuration and occupant behaviour, are considered in this study.\nDesign/methodology/approach: Both EnergyPlus and Monte Carlo simulation were used to predict carbon emissions for different combinations of the design parameters. A total of 100,000 simulations were conducted to ensure a full range of simulation results. Based on the simulation results, a regression model was developed to estimate carbon emissions of office buildings based on preliminary design information.\nFindings: The results show that occupant density, annual mean occupancy rate, equipment load, lighting load and chiller coefficient of performance are the top five influential parameters affecting building carbon emissions under the subtropics. Besides, the design parameters of ten office buildings were input into this user-friendly regression model for validation. The results show that the ranking of its simulated carbon emissions for these ten buildings is consistent with the original carbon emissions ranking.\nPractical implications: With the use of this developed regression model, design teams can not only have a simple and quick estimation of carbon emissions based on the building design information at the conceptual stage but also explore design options by understanding the level of reduction in carbon emissions if a certain building design parameter is changed. 
The study also provides recommendations on building design to reduce carbon emissions of office buildings.\nOriginality/value: Limited research has been conducted to date to investigate how the change of building design affects carbon emissions in the subtropics where four distinct seasons lead to significant variations of outdoor temperature and relative humidity. Previous research also did not emphasise on the impact of high-rise office building designs (e.g. small building footprint, high window-to-wall ratio) on carbon emissions. This paper adds value by identifying the influential parameters affecting carbon emissions for a high-rise office building design and allows a handy estimate of building carbon emissions under the subtropical conditions. The same approach may be used for other meteorological conditions.\n",
"role": "assistant"
}
] |
The evaluation of core competencies in networks: the network competence report
|
[
"Intangibles",
"Intellectual capital reporting",
"Intellectual capital disclosure",
"Interventionist approach",
"Monetary evaluation",
"Network core competencies"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\n1. Introduction: Over recent decades, there has been a rapid global transition from an industrial to a knowledge-based economy, in which wealth is created by developing and managing knowledge (Andriessen, 2004a, b; Ricceri and Guthrie, 2009; Dumay and Garanina, 2013), commonly termed \"intellectual capital\" (IC). Numerous practitioners (such as Skandia and KPMG) (see Mouritsen et al., 2001; Andriesson, 2005) and scholars (see, for instance, Sveiby, 1997, 2001; Petty and Guthrie, 2000; Bontis, 2001; Andriesson, 2005) have proposed a range of methods for measuring and evaluating IC. However, despite the growing strategic relevance of this research topic, there remains limited understanding of how organisations manage, measure, and report their IC (see, for instance, Guthrie, 2001; Marr, 2005; Chiucchi, 2013). In particular, the analysis and management of core competencies (CCs), which serve as an appropriate lens to identify company IC (Andriessen, 2004b), underlines some deficiencies.\n2. Literature review: This section summarises the theoretical topics relevant to this paper, which intersect the literature on IC and on inter-organisational collaboration. The paper's originality relates to the intersection in the literature between IC disclosure and the resource based view (RBV), and its contribution to IC disclosure in inter-organisational collaboration.\n3. Research methods: Research objectives\n4. Findings: The NCR identified and valued 35 CCs, the details of which are provided in Figures A1-A3 and Table AI. Globally, their value exceeds 35 million euros (Figure 6) and represents the value of CCs not captured by the \"traditional\" balance sheet.\n5. Discussion: The sum of the total assets of the sampled companies is approximately 135 million euros (comprising nearly 100 million euros in total \"traditional\" balance sheet assets plus approximately 35 million euros of CCs). 
The intangible assets of the \"traditional\" balance sheet are worth around 3.3 million euros; thus, they represent less than 10 per cent of the monetary valuation of the CCs. From the perspective of integrated reporting[8], the total value of CCs (Figure 6) could be considered as an additional accounting item, which can be added to the assets side of an ad hoc balance sheet, offset by a new accounting item in the shareholders' equity called \"CC surplus\".\n6. Conclusions: This paper has presented a model to depict, disclose, and report the value of CCs in a network of companies, utilising the NCR as a statement to provide reliable information enabling:\n",
"role": "user"
},
{
"content": "Purpose: The purpose of this paper is to propose a model to disclose, report, and manage intellectual capital (IC) in a network of companies. To this end, it provides a monetary evaluation of core competencies (CCs), which may be defined as a bundle of various types of intangibles, aggregating their value into a network statement, called a network competence report (NCR).\nDesign/methodology/approach: The paper utilises the interventionist approach. The intervention was conducted by the authors and studied through joint reflections on documentation from meetings and individual, semi-structured interviews.\nFindings: The NCR makes IC more transparent, thereby allowing companies and network managers to assess the strengths and weaknesses of CCs with a consequent potential insight into their potential earnings.\nResearch limitations/implications: This method is labour-intensive, especially in its first application, and the data collection requires considerable company involvement. The interventionist approach may have influenced the empirical results, which may be affected by subjectivity. As the paper involves a single network, care should be taken in generalising its empirical evidence.\nPractical implications: In making IC management more effective, the NCR is valuable for academics, management, political authorities and, more generally, for a network's stakeholders. The NCR is a tool for internal and external communication purposes, creating the conditions to mobilise IC. The proposed model supports the diagnosis of networks by providing CC maps and assessments relevant to their governance and competitiveness. The NCR depicts company and network CCs, allowing intertemporal comparisons that facilitate understanding of the effectiveness of the network's actions and the importance of belonging to it.\nOriginality/value: This paper represents a first attempt to evaluate, in monetary terms, CCs in a network. Its value lies in its practical implications. 
Moreover, the paper investigates IC in applied terms, contributing to reducing the gap between theory and practice.\n",
"role": "assistant"
}
] |
Organisational culture and quality management system implementation in Indonesian construction companies
|
[
"Indonesia",
"Performance",
"Construction management",
"ISO 9000 series",
"Organizational Culture Assessment Instrument",
"Quality management system"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\n1. Introduction: The study of the organisational culture in the construction industry is still in the stage of debate (Oney-Yazici et al., 2007). Despite the complexities involved in measuring the culture of the construction industry (Tijhuis and Fellows, 2012), this culture is regarded as being worthy of research, especially in relation to the organisational culture needed to support quality management systems (QMSs) (Koh and Low, 2008; Watson and Howarth, 2011) and to improve organisational effectiveness, and therefore, organisational performance (Coffey, 2010; Cheung et al., 2011).\n2. Relationship between organisational culture and QMS implementation in construction companies: Culture addresses people matters, and thus becomes the crucial factor in effective QMS implementation (Corbett and Rastrick, 2000; Irani et al., 2004; Koh and Low, 2008). This means that, with recognisable adaptable strong organisational culture profiles, a construction company's QMS can be efficiently and effectively implemented and continually improved, to provide a strategic management system that improves a company's performance (Coffey, 2010; Cameron and Quinn, 2011). A number of studies have been undertaken in analysing how a QMS can be successfully implemented and identifying the potential benefits to be gained by a construction company that achieves effective implementation of such a system. Corbet and Rastrick (2000), Irani et al. (2004) and Coffey (2005, 2010) agree that there is a definite relationship between organisational culture and quality in construction projects. It has also been noted that the nature of the culture is a major determinant factor for such success, and that this, in turn, is related to the successful implementation of a QMS. 
Koh and Low (2008) have referred to this relationship as \"mutually reinforcing\", and by this they mean that if the organisational culture is good, the QMS will be applied appropriately, which leads to good products and services.\n3. Research methods: This research employed a questionnaire survey of construction industry practitioners, to collect empirical data regarding the assessment of the Indonesian construction companies' organisational culture profiles and the examination of the QMS-ISO 9001:2008 implemented by the contractors. It was followed by an analysis of the influence of different organisational culture profiles on the companies' QMS implementation. For questions pertaining to organisational culture assessment, the OCAI developed by Cameron and Quinn (2006) was utilised. Questions regarding the examination of the QMS implementation were based on studies in the literature, which consist of current problems within the quality system, the implementation of QMS-ISO 9001:2008 principles and elements, and the companies' business performance while implementing their QMS. This paper focuses on the organisational culture aspects and their influence on the QMS implementation. The assessment on the implementation of QMS was discussed in Willar et al. (2015).\n4. Discussion: 4.1. Assessment of Indonesian contractors' culture profiles\n5. Conclusion: Based on the analysis of the results and the discussion aforementioned, the following conclusions can be made:\n6. Research implications: This research study has implications that can be viewed from three different perspectives: a contribution to the body of knowledge, a contribution to the construction industry; and a contribution for policy. 
For the body of knowledge, the results reported contribute to and enrich the literature on organisational culture assessment and QMS implementation in construction organisations, as well as contribute to the knowledge in the area of Indonesian construction's organisational culture and QMS development. This study has also demonstrated the methods for analysis of the relationships between the organisational culture profiles (independent variable) and QMS principles-elements (dependent variable), for which the original Cameron and Quinn's analysis model was used. By framing the culture profiles as a categorical variable (with QMS principles-elements being a continuous variable), the study shows additional ways for analysing the relationships between the two variables.\n7. Research recommendations: Several recommendations have been provided for future research. First, it is recommended that future research be undertaken to confirm the dominance of the mixed culture profile found in this study. Second, in this study, there have been examinations of the relationships between the organisational culture and QMS principles-elements variables, and of the relationships among the QMS variables. However, a further analysis should be considered to simultaneously examine the relationships among all variables. A SEM analysis could be used to undertake such data analysis. Third, factors such as corruption-collusion-nepotism and unhealthy competition practices among contractors, might be considered potentially influencing the effective implementation of construction companies' QMS. The influence of these factors would, therefore, be an appropriate topic for future research. Finally, different countries have different and specific national cultures, different legal guidelines, and different political and business environments, which might influence an organisations' culture. 
This study might have potential value for the examination of culture-based QMS implementation in construction organisations in other emerging and developing countries and on a wider global basis.\n",
"role": "user"
},
{
"content": "Purpose: The review of literature found that there is a significant correlation between a construction company's organisational culture and the company quality performance. The purpose of this paper is to assess the organisational culture profiles of Indonesian construction companies, and to examine the influence of the companies' organisational culture profiles on their quality management systems (based on QMS-ISO 9001:2008) implementation. Prior to conducting the examination, there are examinations of the relationships among the quality management system (QMS) variables.\nDesign/methodology/approach: The methodology employed a survey questionnaire of construction industry practitioners who have experience in building and civil engineering works. The Organizational Culture Assessment Instrument was selected due to its suitability in assessing organisation's underlying culture.\nFindings: Within the examination among the QMS variables, it was found that problematic issues associated with the implementation of QMS-ISO 9001:2008 in Indonesian construction companies can affect the implementation of the QMS and contribute to the lower level of companies' business performance. It was also found that there is no significant relationship between the QMS implementation and the companies' business performance. By using the Competing Values Framework diagram, it was found that most of the construction companies' organisational culture is characterised by a Clan type which is reflected in how employees are managed, how the organisation is held together, and how the organisation's success is defined; the leadership style is Hierarchy-focused, while the organisation's strategy is Market type. 
It was also found that different culture profiles have different influences on the QMS implementation.\nOriginality/value: A strong mixed Hierarchy and Market culture needs to be developed within the construction companies in Indonesia, as the driver to support proper and successful implementation of their QMS in order to enhance business performance in a quality performance-oriented Indonesian construction industry.\n",
"role": "assistant"
}
] |
Modelling wholesale distribution operations: an artificial intelligence framework
|
[
"Demand forecasting",
"Supply chain management",
"Artificial neural networks (ANNs)",
"Multiple neural networks (MNNs)",
"Price determination",
"Wholesale distribution"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\n1. Introduction: In the wholesale distribution of consumer goods, the role of a wholesaler is to provide its customers with a broad assortment of products from different suppliers, which in most cases are large companies managing business relationships exclusively with their key distributors (Figure 1). In such a system, the wholesaler renders the service of making the desired products available for purchase at a specific time, which creates value for the customer (Ehrenthal et al., 2014). The primary focus of the wholesaler is to maximize the difference between what retailers will pay and what vendors will accept as payment for the products they offer. The distributors typically have a strong bargaining power, which enables them to obtain good contractual conditions with industries, with profit margins up to 6 percent (Holweg et al., 2016; Wyman, 2014).\n2. Literature review: In supply chain management, a typical approach to demand forecasting involves the use of statistical tools, in the form of exponential smoothing, time series regression and autoregressive and integrated moving average (ARIMA) models (Fildes et al., 2009; Silva Filho et al., 2013; Babai et al., 2013; Khosroshahi et al., 2016; Huber et al., 2017). These models all assume linear relationship between independent and dependent variables, which is not always the case for demand forecasting in retail distribution (Gruen and Corsten, 2007). ANNs fall within the class of non-linear methods and are, therefore, extensively used in the demand forecasting process (Bala, 2012; Cho and Kim, 1995; Kittler et al., 1998). ANNs are mathematical tools inspired by biological neural networks that provide, for a given problem, solutions similar to those formulated by the human mind. Multiple neural networks (MNNs), i.e. 
the connection of more ANNs, were found to be particularly suitable for improving the accuracy of the model prediction (Nguyen and Chan, 2001). Connecting more ANNs means considering the input data of the central net to be the output of the preceding networks.\n3. Problem formulation: 3.1 The context\n4. ANN modeling and software implementation: The effect of the factors listed above on the selling price and retailers' demand is non-linear and hard to describe using mathematical relationships (Wei and Li, 2015). Moreover, besides these factors, the retailer's demand is also characterized by an intrinsic variability (Islek and Oguducu, 2015). These considerations motivate the adoption of ANNs.\n5. Application of the ANN framework to the case study: 5.1 Data collection\n6. Conclusions: This study has proposed an ANN framework for predicting retailer's demand in a wholesale supply chain, with the ultimate aim to help the wholesaler decrease the occurrence of OOS situations. The proposed approach makes use of two ANNs. ANN1 takes six factors as inputs and returns the selling price of the products sold by the wholesaler as output, while ANN2 takes five factors as inputs and returns an estimate of the retailer's demand as output. This output can be used by the wholesaler to issue purchase orders to its suppliers.\n",
"role": "user"
},
{
"content": "Purpose: The purpose of this paper is to propose an artificial intelligence-based framework to support decision making in wholesale distribution, with the aim to limit wholesaler out-of-stocks (OOSs) by jointly formulating price policies and forecasting retailer's demand.\nDesign/methodology/approach: The framework is based on the cascade implementation of two artificial neural networks (ANNs) connected in series. The first ANN is used to derive the selling price of the products offered by the wholesaler. This represents one of the inputs of the second ANN that is used to anticipate the retailer's demand. Both the ANNs make use of several other input parameters and are trained and tested on a real wholesale supply chain.\nFindings: The application of the ANN framework to a real wholesale supply chain shows that the proposed methodology has the potential to decrease economic loss due to OOS occurrence by more than 56 percent.\nOriginality/value: The combined use of ANNs is a novelty in supply chain operation management. Moreover, this approach provides wholesalers with an effective tool to issue purchase orders according to more dependable demand forecasts.\n",
"role": "assistant"
}
] |
Not just a pretty picture: visual literacy education through art for young children
|
[
"Information literacy",
"Education",
"Schools",
"Literacy",
"Museums",
"Children (4-6 year old)",
"Visual literacy"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\nIntroduction: Contemporary culture is increasingly relying on visual communication offered through \"the visually rich web, photo dependent social networks, video saturated media, and graphically sophisticated entertainment and gaming\" (Metros, 2008, p. 102). At a time when young people spend more time with image-based media than ever before (Kaiser Family Foundation, 2010), it is important to teach them to interpret and create visual messages early in life (National Art Education Association, 2016; National Association for the Education of Young Children, 2016; Common Core State Standards Initiative, 2016).\nLiterature review: Children learn to understand visual information much earlier than they learn to interpret textual information. By the age of one children recognize familiar faces and objects, and by the age of three they start using visuals to communicate with and about the world (Bamford, 2003). As children grow older, they face the need to make critical judgments about information presented in a visual form as well as to effectively communicate information through visual forms. Addressing these challenges requires a higher level of visual literacy to understand visual elements that make an image (syntax) and represent its meaning (semantics). This higher level of visual literacy does not come naturally and needs to be developed (Ausburn and Ausburn, 1978; Rice, 1988). For example, researchers suggest that young children can independently identify objects, colors, and a few other basic visual elements (Gardner, 1970; Eckhoff, 2010; Yenawine, 2003), but require training to understand more complex elements, such as perspective, color temperature, technique, or style (O'Neil, 2011; Callow, 2008; Gardner, 1970; Alper, 1996). 
The importance and benefits of developing visual literacy in young children has been recognized by the National Association for the Education of Young Children and the National Art Education Association, yet there are very few programs aimed at developing basic visual literacy skills in pre-school children. Since our study explored the groundwork for the development of visual literacy skills in young children, we reviewed relevant literature on the common elements that constitute visual literacy, the visual literacy studies on children, and existing visual literacy programs for children.\nMethods: Very little is known about young children's knowledge of visual literacy elements, their ability to comprehend newly introduced visual literacy concepts as well as the support conditions that are currently available for the development of visual literacy programs. We designed a study to examine these topics by addressing the following research questions:\nFindings: Pre- and post-instruction descriptions of visual artwork (RQ1 and RQ2)\nParents' and educators' perspectives (RQ3 and RQ4): A total of nine parents completed the online questionnaire about their family's museum visitation routines and general attitudes toward visual literacy. Table IV contains a summary of parent responses.\nDiscussion: Image selection\nConclusion: Despite an increase in the amount of time children spend with image-based media (Kaiser Family Foundation, 2010), we have a limited understanding of their abilities to interpret visual messages. In an effort to better understand children's readiness for visual literacy instruction, we interviewed 17 children aged four- to six-years old about three impressionist paintings. Overall, children reacted positively to the images and were engaged in discussions about them. Children exhibited extensive knowledge of simple visual literacy elements (e.g. color, shape, line), and generally had no difficulties learning more nuanced concepts (e.g. 
primary and warm/cool colors, use of shapes to construct objects). Children's initial understanding of more abstract elements (perspective and salience) was limited, but most children were able to better comprehend these elements after instruction. Children's comprehension of the visual elements that were used in this study was further confirmed when they were able to identify these elements in their surroundings. These findings suggest that young children are ready to receive instruction on visual literacy elements using art images. Results from questionnaires given to children's parents and teachers indicate a strong interest in and support for visual literacy programs. Based on our study's findings, we offer the following recommendations for educators, information curators, technology developers, parents and others involved in the development of literacy programs for young children:\n",
"role": "user"
},
{
"content": "Purpose: The purpose of this paper is to understand young children's knowledge of visual literacy elements as well as their ability to comprehend newly introduced visual literacy concepts. The study also examined existing support for visual literacy programs from parents and educators.\nDesign/methodology/approach: The study explored the knowledge of basic visual literacy elements of young children enrolled in two private schools in the New York City metropolitan area. The authors interviewed 17 children, aged four to six years old, about fine art paintings using a semi-structured interview format. Children's responses were qualitatively analyzed to determine their initial level of visual literacy and their ability to learn and retain the concepts of visual literacy after receiving basic instruction. The children's educators and parents completed online questionnaires that were quantitatively analyzed to determine their level of support for visual literacy programs.\nFindings: The findings show that young children exhibited extensive knowledge of simple visual literacy elements (color, shape, line), and limited understanding of more abstract elements (perspective and salience). Children's knowledge of visual elements improved after instruction. Parents and educators expressed support for incorporating visual literacy instruction in early childhood education.\nResearch limitations/implications: The study relied on a sample of children and adults drawn from two private schools. The sample's demographics might have affected study findings. More studies are needed using a larger and more diverse sample.\nPractical implications: The study suggests that young children are ready to receive instruction on visual literacy elements using art images. Children reacted positively to the images and were engaged in the discussions about them, supporting the use of fine art paintings as an instrument to introduce visual literacy concepts to young children. 
Survey of children's parents and teachers indicated strong interest in, and support for such programs.\nSocial implications: With the increase of visual information production and consumption, it is important to introduce visual literacy early in life. The study advances research in methods for developing visual literacy instruction for young children.\nOriginality/value: There are no previously reported studies that have examined pre-kindergarten children's knowledge of basic visual literacy elements and reactions to visual literacy instruction.\n",
"role": "assistant"
}
] |
An investigation of the status of the Malaysian construction industry
|
[
"Construction industry",
"Lean production",
"Malaysia"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\nAn overview of the Malaysian construction industry: Malaysia began develop its construction industry since independence. According to Lewis (1955), more than half of capital formation consists of work in construction. Hence, the expansion of capital is a function the rate at which the construction industry can be expanded. This can be seen in the initial economic plan (1956-1960) where it was basically a development expenditure plan. The primary concern was developing the infrastructure during independence because of its inadequacy. In order for the nation's economy to prosper, the construction industry has to be developed first for the economy to take a one step further.The Malaysian construction industry (MCI) is generally separated into two areas. One area is general construction, which comprises residential construction, nonresidential construction and civil engineering construction. The second area is special trade works, which comprises activities of metal works, electrical works, plumbing, sewerage and sanitary work, refrigeration and air-conditioning work, painting work, carpentry, tiling and flooring work, and glass work.Role of the Malaysian construction industry\nRegulatory framework and registration of contractors by CIDB: Construction companies in Malaysia must be registered before executing any construction work. The CIDB established in 1994 registers all construction companies on behalf of the Malaysian Government. CIDB's headquarters are in Kuala Lumpur (www.cidb.gov.my). One of the functions of CIDB is to regulate and register construction firms in seven grades from G1 to G7. These grades depend on the construction company experience, financial status and personnel capability. 
The registration of construction companies is on a one- to three-year basis and is granted to qualified construction companies under the relevant grade that defines the limits of the value of work the company is allowed to undertake. Table VII shows the value of work for which approved construction companies can tender. Construction companies can apply for promotion to a higher grade. Other functions of CIDB are to provide training, to promote and stimulate development, improvement and expansion of the construction industry, to advise the government on matters affecting the construction industry, to promote, stimulate, and assist in the export of services relating to the industry, and to initiate and maintain construction industry information systems. A total of 64,194 construction companies are registered with CIDB in different grades as shown in Table VIII and Figure 1. These grades (G1-G7) are classified further as specialist category building construction (B), civil engineering construction (CE), mechanical and electrical construction. These specialist categories are further subdivided into subspecialties. Building construction, for example, is subdivided into 19 subspecialties from B01 to B19, and civil engineering construction has 20 subspecialties from CE01 to CE22. Mechanical construction has 15 subspecialties from M01 to M15, and finally electrical has ten subspecialties from E01 to E10. As per a CIDB ruling, a construction company can be registered in more than one category and subspecialty within one grade (tendering limit) as long as it satisfies the registration requirements.\nChallenges in the Malaysian construction industry: Improvement in the operations\nStrategic directions for the construction industry: Over the past decade, the MCI has contributed significantly to the economy as an enabler of growth to other industries. 
Although it accounts for <5 percent of GDP, the industry is an essential growth enabler because of its extensive linkages with the rest of the economy, for example, the manufacturing industry (such as basic metal products and electrical machinery). In 1998, when the construction industry experienced a sharp downturn, basic metal incurred some 35.6 percent drop in output (Malaysia Market Watch, 2005). Nevertheless, recent events, such as globalisation and the saturation of business prospects in Malaysia, have contributed to the sudden plunge in the growth of the industry. At this juncture, it is clear that fundamental changes are now needed in order for the construction industry to achieve greater efficiency and address the overarching challenges, as well as seize new opportunities in liberalized global markets. The MCI has reached a critical juncture (CIMP, 2005) in its development. The vast mega projects that sustained the industry over the past decade have been either completed or reprioritised. In addition, the dampening effect of the Asian financial crisis and the subsequent economic prudence is slowly taking its toll on the industry. At the same time, the growth of countries abroad - from China to Kazakhstan - is creating opportunities that, with some effort, would be apt for the MCI to garner. Internally, there are considerable issues across the construction industry's value and supply chain. The challenges discussed above have retarded the forward movement of the construction industry. Thus, the current situation necessitates a comprehensive framework to ensure that a strong foundation will be laid and that construction players will be well positioned to compete globally. The critical success factors are the elements that are imperative to the achievement of the strategic thrust and strategies. 
These are productivity, quality, human resources, knowledge, innovation, environmental practices, industry sustainability, and professionalism (CIMP, 2005).\nTime to change: Traditional ways of performing and managing construction processes face unprecedented challenges. The growing competition forces construction organizations to rethink their construction processes for improving productivity, quality, and efficiency (Karna and Jonnonen, 2005). The construction industry may also benefit from best practices in other industries - for example electricity (Chau, 2009), breweries (Goncharuk, 2009), healthcare (Duggirala et al., 2008) - or new processes - for example quality function deployment (Miguel and Carnevalli, 2008). The MCI has completed numerous projects but is not cost, quality and time effective. Additionally, the Malaysian government's investments in construction jobs have decreased substantially (Abdul Rahman et al., 2005) in the last few years, and project owners are working on construction cost reductions to implement construction projects with client satisfaction, effectiveness, and within limited resources. These are the reasons why today's MCI must implement an aggressive transformation plan for the future. In order to face the challenges of the twenty-first century, the MCI must compete through continuous productivity improvement, more value-added operations and enhanced quality, and there is no doubt that substantial improvements in operational strategies are possible (Zaini Omar, 2000). The MCI sooner or later has to innovate new operational and productivity instruments so as to have a competitive advantage in local and global markets. The industry needs to prepare for paradigm shifts to improve its competitiveness, which can be achieved by using good practices and advanced construction techniques and by optimising resource utilization. Increasing productivity is the greatest untapped area for improvement in the construction industry. 
Companies that embrace new technologies, innovative processes, collaborative partnering, improved safety and reduction in litigation costs through contract arrangements will have a significant advantage over those that continue with past practices. Only with enhanced capabilities and capacity can the industry withstand challenges in good and bad times (Abdul Rahman et al., 2005; Zaini Omar, 2000).\n",
"role": "user"
},
{
"content": "Purpose: - The purpose of this paper is to understand the production processes utilized by the Malaysian construction industry (MCI).\nDesign/methodology/approach: - The paper follows an earlier work published in this journal in which the authors analyze the global construction industry focusing on the evolution of lean production systems.\nFindings: - The future of construction is exciting. International markets, innovations in technology, and techniques of construction project management, new relationships across disciplines are all avenues of development. To make the best use of their opportunities, the MCI must invest in research and development (R&D), undertake public education about the field and develop new methodologies to improve operational performance in construction.\nOriginality/value: - This paper provides a detailed analysis of the MCI, it's impact on the economy, key success factors, struggles and problems that the industry currently faces. Analysis of key functional areas like operations management, R&D, and occupational safety and health administration provide insights into improvement.\n",
"role": "assistant"
}
] |
Supplier collaboration and new product performance: a contingency model
|
[
"Taiwan",
"New products",
"Supplier relations",
"Manufacturing industries",
"Supplier collaboration",
"New product performance",
"Technological capacity",
"Promotion capacity",
"Technological substitution"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\n1 Introduction: As market competition continues to intensify, many firms must collaborate with their suppliers to achieve technological innovation. Utilizing supplier expertise and knowledge of components and parts may help firms build a broad knowledge-sharing network and improve product design (Hong et al., 2004; Oh and Kim, 2005) or create new methods for developing new products of a higher quality (Tsai, 2009). Also, supplier involvement may help firms quickly identify potential technical problems that hinder design changes or enable firms to conduct concurrent engineering (Hilletofth and Eriksson, 2011). In addition, firms may benchmark best practices/processes with suppliers, thereby enhancing the ability to respond to customers' needs and wants (Stank et al., 2001). Overall, supplier involvement can therefore speed up the development of new products and achieve a competitive advantage (Langerak and Hutlink, 2008; Lau, 2011; Oh and Rhea, 2010). Supplier collaboration (SCL) in the new product development process refers to the cooperation of a firm with its key supplier(s) through a collaborative agreement. An increasing number of studies illustrate the impact of SCL on new product performance in terms of sales (Belderbos et al., 2004; Faems et al., 2005; Freel, 2003; Ledwith and Coughlan, 2005; Miotti and Sachwald, 2003). These studies theoretically hypothesize a positive link between SCL and new product performance, but empirical evidence has so far produced ambiguous results. Miotti and Sachwald (2003) and Faems et al. (2005) found that SCL has a positive impact on new product performance. However, the findings of Freel (2003) and Belderbos et al. (2004) show an insignificant relationship between SCL and new product performance. In addition, Ledwith and Coughlan (2005) found that SCL has no effect on new product performance. 
These studies were conducted using the Community Innovation Survey (CIS) database with a primary focus on the manufacturing sectors of European countries (France, the UK, Germany, and Spain). Several control variables were used, including firm size (FS), research and development intensity, and industrial sectors. Thus, the conflicting results of these studies may logically result from ignoring the factors that affect the relationship between SCL and new product performance. In other words, understanding the relationship between SCL and new product performance may require the additional consideration of moderating variables. As such, this study aims to investigate the factors affecting the SCL-new product performance relationship. Existing literature suggests that the successful integration of external knowledge may depend on the firm's internal conditions (Jones et al., 2001). The resources required for successful innovation differ in emphasis according to the level of perceived environmental uncertainty (Droge et al., 2008). Past research suggests that the new product development process and its outcomes depend on a firm's decision makers' perceived level of change in the external environment (Calantone et al., 2003; Droge et al., 2008). The current study considers three factors as moderating variables: technological capacity (TC), promotion capacity (PC), and technological substitution. The first two are part of the firm's internal conditions, and the last one represents changes in the technological environment. From the organizational learning perspective (March, 1991), inter-organizational collaboration is a process of learning external knowledge. During this process, the level of accumulated technological knowledge is associated with the effectiveness of external learning. 
Firms having a great deal of technological knowledge can effectively identify and utilize external knowledge, thus improving the effect of external linkage on product innovations (Jones et al., 2001). PC refers to a firm's ability to sense the market situation and effectively execute the promotional activities of a new product launch (Parry and Song, 1994). Firms with a high level of PC may contribute market insights to the SCL for product innovation (Wheelwright and Clark, 1992). Furthermore, as technological change increases, incorporating new technological knowledge into product development is becoming more uncertain. Past research suggests that environmental factors external to a firm can neutralize or dissipate a resource's importance on competitive advantage (Song and Montoya-Weiss, 2001). As such, technological change may affect the SCL-new product performance relationship. Surprisingly, relatively few authors have examined these issues. Therefore, the purpose of this study is to fill the gap in existing knowledge by investigating how TC, PC, and technological substitution affect the impact of SCL on new product performance. This study contributes to existing literature in two ways. First, it provides a moderating framework that links SCL to the success of innovative products. Previous studies only address the relationship between SCL and new product performance. In contrast, this study presents the influences of SCL at different levels of TC, PC, and technological substitution. This study delves deeper into the impact of SCL on new product performance than previous studies. The results of this study not only increase one's understanding of the relationship between SCL and new product performance, but also explain why some firms attain better new product performance than others with the same level of SCL. Second, this study extends the roles of TC, PC, and technological substitution. 
While previous research views these three factors as determinants of new product performance, this study highlights their moderating roles in linking external learning with SCL and new product performance. The results of this study provide valuable guidance for firms pursuing technological innovation through SCL. The remainder of this paper is organized as follows. Section 2 presents the research model and the research hypotheses. Section 3 introduces the research methods, including variable measurements and the database used in this study. Section 4 presents the results and discussions. Section 5 summarizes the results and discusses their implications for theory and managerial practice, providing potentially interesting issues for future research.\n2 Conceptual model and research hypotheses: The conceptual model of this study is grounded in the interactive fit argument, one strand of contingency theory (Venkatraman, 1989). The view proposes that the firm's performance is attributable to a match between its strategic behaviors and the internal and external conditions (Atuahene-Gima and Murray, 2004; Zeithmal et al., 1988). Specifically, in the new product development context, extant research has suggested that technological level and promotion capability are firms' internal conditions which moderate the linkage between innovation outcomes and their antecedents (Hsieh et al., 2008; Ortega, 2010; Tsai et al., 2011). On the other side of the same coin, the product development contingency theory suggests that technological changes moderate the relationships between intra-organizational factors and product innovation outcomes (Calantone et al., 2003; Droge et al., 2008). As such, this study illustrates the conceptual framework in Figure 1. The framework is a contingency model which describes how the effect of SCL on new product performance is influenced by three factors: TC, PC, and technological substitution. 
The first two are a firm's internal conditions, and the last one is its external factor. The following section describes the underlying rationale of specific hypotheses concerning how these three variables affect the relationship between SCL and new product performance. Note that the articles cited in the following sections use the concept of capacity as the basis of their discussion, but their arguments seem to deal with the concept of proficiency. The two concepts are very similar. Indeed, in these studies, capacity refers to the amount of a space, the ability to do something, or the power to acquire knowledge (Lane and Lubatkin, 1998; Tsai et al., 2011), while proficiency is the state or quality of being proficient (Langerak et al., 2004; Hsieh et al., 2008). Although these are different concepts, the degree of proficiency should depend on the level of capacity.\n2.1 Technological capacity\n3 Research method: 3.1 Data source\n4 Analyses: 4.1 Descriptive statistics\n5 Conclusions: 5.1 Summary and implications\n",
"role": "user"
},
{
"content": "Purpose: - The purpose of this paper is to present a contingency model to examine how technological capacity, promotion capacity, and technological substitution affect the supplier collaboration-new product performance relationship.\nDesign/methodology/approach: - This study uses data from a Government survey of technological innovation. A total of 201 machinery/electronics equipment manufacturing firms in Taiwan comprise the sample. A Tobit regression analysis is adopted to analyze the data.\nFindings: - It is found that technological capacity and promotion capacity enhance the effect of supplier collaboration on new product performance. Technological substitution mitigates the relationship between supplier collaboration and new product performance.\nResearch limitations/implications: - The sample of this study just focuses on machinery/electronics equipment manufacturing firms. The new insights of this study imply that by failing to consider the contingency roles of technological capacity, promotion capacity, and technological substitution, previous research may have assumed away the conditions external and internal to a firm and thus may have reached an oversimplified view of the link between supplier collaboration and product innovation performance.\nPractical implications: - Firms can improve the effect of supplier collaboration on product innovation by enhancing their technological capacity and promotion capacity.\nOriginality/value: - The paper makes contributions to explain why some firms attain better new product performance than others under the same level of supplier collaboration.\n",
"role": "assistant"
}
] |
The impact of impression management over time
|
[
"Impression management",
"Apologies",
"Ingratiation",
"Justifications",
"Self-promotion"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\n__NO_TITLE__: Employees use impression management (IM) to create, maintain, or protect an image held by other individuals (Rosenfeld et al., 1995). IM has relevance to a number of important streams of organizational research, including political skill (Harris et al., 2007), organizational citizenship behavior (OCB) (Bolino, 1999), and organizational politics (Ferris and Treadway, 2012). IM influences important outcomes, including hiring decisions, performance evaluations, and career advancement (Bolino et al., 2008). In general, research has focused on how individuals use assertive IM tactics of ingratiation and self-promotion, or defensive tactics of apologies and justifications, to enhance their image at work (Bolino et al., 2008). Given that most employees seek to be seen as likable and effective by supervisors, much of this work has been especially concerned with how IM relates to supervisor ratings of employee likability and performance.\nThe impact of IM used repeatedly or over time: As noted, previous studies have focused on the short-term effects of IM (Bolino et al., 2008). It is likely, though, that supervisors respond differently to initial attempts at IM than they do to subsequent attempts. Graen and Scandura's (1987) dyadic theory of role emergence describes how relationships between supervisors and subordinates evolve over time. Given the focus of our research question, this theory is useful for understanding how the repeated use of IM over time may influence ratings of likability and performance in the context of supervisor-subordinate relationships.\nStudy 1: Method\nResults: IM tactics versus control groups\nDiscussion: The findings suggest that repeated use of all four IM tactics influences supervisor perceptions. Ingratiation was quite effective, as supervisor performance appraisals improved relative to the control group with each trial in which the tactic was used. 
In addition, ingratiation also resulted in consistently higher levels of supervisor liking. Self-promotion tactics also had a positive effect on both supervisor liking and performance evaluations, as supervisors consistently rated self-promoters as more likable and better performing than those who did not manage impressions. Neither the repeated use of the defensive IM tactic of justifications nor that of apologies influenced supervisor liking. Apologies did, however, initially have a positive effect on ratings of performance, but this positive effect weakened over time. Finally, the use of justifications resulted in higher supervisor ratings of performance in each trial, but the size of this positive effect was not consistent over time.\nStudy 2: Method\nResults: Before analyzing our hypotheses, the IM items from Time 1 were factor analyzed to ensure they reflected the four IM tactics. First, an exploratory factor analysis using maximum-likelihood estimation and specifying four factors and a promax rotation was used to examine the factor structure of the items. The results indicated that apologies and justifications did not load cleanly onto two different factors; therefore, we specified a three-factor solution. This solution yielded items that cross-loaded onto multiple factors, so these items were deleted. The remaining 13 items were re-analyzed using confirmatory factor analysis (CFA) with maximum-likelihood estimation. The CFA indicated that a three-factor model fit the data reasonably well (χ2=72.01, df=62, p=0.18; CFI=0.94; TLI=0.92; RMSEA=0.08), and all of the items loaded significantly onto their specified factor. The items, factor loadings, and inter-factor correlations appear in Tables III and IV.\nDiscussion: The findings with regard to defensive IM tactics suggest that the stage of the supervisor-subordinate relationship matters. 
Specifically, the relationship between defensive tactics of IM and evaluations of likability and performance was stronger at Time 1 than at Time 2. Further, although the findings were not completely supportive of this idea with regard to ingratiation, the relationship between ingratiation and likability was significant at Time 1, but not at Time 2 (the difference was marginally significant). Taken together, the results of Study 2 are most supportive of the idea that defensive IM tactics lose their efficacy over time.\nGeneral discussion: The findings of these studies suggest that defensive IM tactics lose their power over time, but there was less evidence that ingratiation and self-promotion lost efficacy over time (although the findings of Study 2 suggest that the relationship between ingratiation and perceptions of likability may be stronger earlier on in a working relationship). This research is unique in that it examined the effects of both assertive and defensive IM tactics on both evaluations of likability and performance ratings, and these relationships were analyzed at multiple points in time. Accordingly, the findings of our studies extend our understanding of IM in at least two important ways.\nLimitations and directions for future research: Although this research enhances our understanding of IM in important ways, it also has several limitations. First, as noted earlier, the findings across the two studies were somewhat inconsistent. In particular, in Study 1, none of the findings with regard to performance ratings was significant. This may be explained by the fact that performance was held constant across all of the conditions in order to achieve greater control in our study. Assuming performance ratings are fairly objective, and evaluations of likability are more subjective, it is perhaps not surprising (in retrospect) that our Study 1 results were more relevant to the latter than to the former. 
In future studies, it would be worthwhile to employ research designs that vary performance systematically across different conditions in a way that might more reasonably allow one to examine differences with regard to performance evaluations.\nPractical implications: Maintaining a positive performance image is important to most people in organizations (Rosenfeld et al., 1995). How one should go about this, though, is not always clear. Much research and anecdotal evidence suggest that people should utilize IM in the workplace in order to be perceived as more competent, to be better liked, or to get ahead (Bolino et al., 2008). However, the findings of these two studies suggest that IM is more meaningful in earlier stages of a relationship than later ones. New employees would be wise, then, to recognize this small window of opportunity and to \"act\" accordingly. At the same time, our research suggests that the potential for IM to backfire, as emphasized in previous studies of IM (e.g. Crant, 1996; Turnley and Bolino, 2001), is something employees may need to be more cautious about in their initial interactions with supervisors than once they have established themselves. Lastly, supervisors should consider the initial influence that IM has on their ability to objectively evaluate their new subordinates.\n",
"role": "user"
},
{
"content": "Purpose: The purpose of these studies was to investigate how the repeated use of impression management (IM) tactics is related to supervisor perceptions in newly formed supervisor-subordinate dyads.\nDesign/methodology/approach: Two studies were conducted - a lab study in which participants evaluated a confederate who performed an accounting task while using different types of IM across five trials, and a field study examining the IM tactics of new employees and their supervisors' ratings of likability and performance at two points in time.\nFindings: In the lab study, the repeated use of ingratiation had an increasingly positive effect on performance ratings, whereas repeated apologies had an increasingly negative effect on evaluations of performance. The influence of IM tactics on ratings of subordinate likability did not change with repeated use. In the field study, subordinates' use of apologies and justifications was more strongly associated with supervisor evaluations of likability and performance in earlier stages of their relationship.\nPractical implications: Employees need to be mindful that IM tactics may vary in their effectiveness depending on the timing and frequency of their use. Furthermore, supervisors should consider the initial influence that IM has on their ability to objectively evaluate new subordinates.\nOriginality/value: This research is unique in that it examined how the repeated use of both assertive (i.e. ingratiation and self-promotion) and defensive (i.e. apologies and justifications) IM tactics are related to both evaluations of likability and performance ratings at multiple points in time.\n",
"role": "assistant"
}
] |
Identification and causal assessment of supply chain flexibility
|
[
"Supply chain flexibility",
"Decision making trial and evaluation laboratory (DEMATEL)",
"Structural equation model (SEM)",
"Causality",
"Indian personal hygiene industry"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\n1. Introduction: The Indian personal hygiene industry includes personal care product manufacturers and raw material suppliers, distributors, retailers and customers (Wang and Uslay, 2018). This industry is one of the leading contributors to the nation's GDP. The fast moving consumer goods (FMCG) industry helps in the development of favorable economic conditions not only through generating export revenues, but also by developing high prospects for employment opportunities (Indian Brand Equity Foundation, 2009). Personal hygiene is important as it has been identified as a positive contributor to mental health among persons experiencing homelessness. Limited access to sanitation may also exacerbate chronic diseases, such as HIV or diabetes, by posing barriers to treatment adherence (Leibler et al., 2017; Skiest et al., 2007). An adequate supply chain in this industry can help maintain a decent standard of hygiene in society.\n2. Literature review: Flexibility is the ability of an operating system to adjust in reaction to uncertainties and simultaneous alterations in the linked supply chain activities (Fayezi et al., 2014; Wallace and Choi, 2011; Husdal, 2010). SCF has many dimensions, right from inbound logistics to ensuring delivery at the customer end (Sanchez and Perez, 2005; Duclos et al., 2003; Garavelli, 2003). SCF requirements differ from firm to firm and are also event dependent (Tipu and Fantazy, 2014). SCF includes the operational systems involvement, logistics practices, structure of the firm, supply network and information systems flexibility (Kumar et al., 2006; Lummus et al., 2003; Duclos et al., 2003). Flexibility in the supply chain design plays an important role in the firm's business (Bianchini et al., 2019; Stevenson and Spring, 2009; Aprile et al., 2005; Bertrand, 2003; Graves and Tomlin, 2003; Garavelli, 2003). 
In the buyer-supplier relationship, flexibility helps to offer stability for the supplier and helps the focal firm to respond to demand variations (Milner and Kouvelis, 2005; Sethi et al., 2004). Information systems help to improve payment cycles and transparency, avoid over-production and decrease inventories (Gosain et al., 2005; Fredericks, 2005; White et al., 2005; Fredriksson and Gadde, 2005).\n3. Rationale behind the study: In today's competitive environment, firms are adopting flexible supply chain strategies to meet the market demand. The Indian personal hygiene industry is full of competition. Firms in the sector are not aware of the different types of flexibilities applicable to their business in terms of a supply chain. Moreover, firms have little understanding of the existing causal relationships among flexibility dimensions. Once firms are aware of the causal relations between SCF dimensions, they can analyse the influence of these dimensions on overall SCF, which is the main objective of the present research. SCF is defined to incorporate the flexibility dimensions that directly influence the consumers and is the shared responsibility of internal (manufacturing, marketing) and external (channel members, suppliers) functions of the firm (Sanchez and Perez, 2005; Dangayach and Deshmukh, 2001). Flexibility in the supply chain is a prospective source of enhanced firm efficiency and can be a substantial measure of supply chain performance (Gawankar et al., 2017; Sanchez and Perez, 2005; Vickery et al., 1999). Researchers have identified quite a few dimensions of SCF (Barad and Sapir, 2003; Van Hoek, 2001; Lee, 2001; Narasimhan and Das, 2000). For instance, physical distribution flexibility helps in offering customized services at an economical price to consumers (Zhang et al., 2005; Van Hoek, 2001). Further, demand management flexibility aids businesses in responding quickly and effectively with appropriate cost and delivery time (Zhang et al., 2005; Lee, 2001). 
The remaining SCF dimensions are described in Table II with their meaning. The literature on SCF is sparse, especially for studies of the entire supply chain in which 22 dimensions of supply chain flexibility are considered. Studies have been conducted in sectors like automotive, observing the impact of SCF on firm performance (Katiyar et al., 2015; Merschmann and Thonemann, 2011; Simaei and Jolai, 2006; Sanchez and Perez, 2005). However, the current study is about assessing the causal relationship of supply chain flexibilities with reference to the Indian personal hygiene industry. Once a causal relationship is identified, an investigation is done to understand the contribution of different flexibility dimensions to overall SCF. The present research may help practitioners to adopt the right mix of flexibilities for efficient and effective supply chain performance. Figure 1 explains the rationale and research framework for the study.\n4. Research design: 4.1 Data collection\n5. Findings: After running the SEM developed for SCF, with the data table obtained from questionnaire responses, the output path diagram is as shown in Figure 7 with the unstandardized regression weights and error values shown alongside the variable and path. With the help of modification indices, covariance between error terms has been introduced to improve the model-fit measures. Since the latent variable flexibility has only one indicator, the error term for that single-indicator variable is considered to have a mean of 0 and a variance of 0. This is because when using a single indicator, one must assume that the item is measured without error.\n6. Discussion: The study has multiple implications for practitioners. In the volatile, uncertain, complex and ambiguous world, it is necessary to have flexibility in the entire supply chain to meet the demand at every node. 
With the availability of multiple brands in the personal hygiene industry in India, it is essential for the firms to develop a production system which has the flexibility to absorb the fluctuation in demand. Adopting flexibility at the outbound, inbound and in-house production system helps firms to produce the product at a minimum cost. The study finds that 11 dimensions have an influence on the supply chain (Table VI) and answers the first research question. The scholarly and managerial implications can be found in the next sub-section.\n7. Limitations and scope for future research: The study has a few limitations. The present study is limited to the Indian personal hygiene industry, and the results obtained can be validated through conducting the same research in other industries as well. Future studies may generalize the findings with different industries. Future studies can include the different tiers of suppliers to bring flexibility into the firm's supply chain. Sales forecasting and reverse logistics flexibility can be other areas for understanding the overall role of SCF. Customer service flexibility, capacity flexibility and cycle time flexibility can be considered for addressing dynamic demands. The flexibility mix and match options can be optimized on the basis of variation in supplier lead time, demand patterns, the extent of modularization and the base of suppliers in any supply chain. Flexible supply chain performance can be further measured on efficiency, responsiveness, sales growth and return on sales. The number of times a shortage is observed can also be a measure of flexible supply chain efficiency. The usage of information technology, IoT, AI, Blockchain or Big Data analytics may bring flexibility into the systems, and the impact of these dimensions can be explored through empirical research.\n",
"role": "user"
},
{
"content": "Purpose: The purpose of this paper is to map the causal relations among various supply chain flexibility (SCF) dimensions having a significant impact on the Indian personal hygiene industry.\nDesign/methodology/approach: The author(s) have conducted an extensive literature review and extracted 22 SCF dimensions. After conducting field visits and expert interactions in firms related to the case industry, 11 major SCF dimensions that seem to have a significant amount of influence on the supply chain performance of the firms were retained for further consideration. The author(s) have used decision making trial and evaluation laboratory (DEMATEL) to establish initial causality and a structural equation model (SEM) to investigate the contribution of different flexibility dimensions to overall SCF.\nFindings: After DEMATEL analysis, three major SCF dimensions were considered for SEM modeling. The result shows that product flexibility and physical distribution flexibility have a favorable influence on SCF, while demand management flexibility adversely impacts overall SCF.\nPractical implications: The approach adopted in the study can help firms to determine and exercise the flexibility dimensions of a particular supply chain. DEMATEL and SEM help to explain the causal relationships between the different dimensions so that firms can take long-term decisions and address uncertainty on the demand and supply sides.\nOriginality/value: This paper has made an attempt to evaluate the supply chain flexibilities, prioritize them and evaluate the relationship amongst these flexibilities and the degree to which they affect or are affected by one another in the Indian personal hygiene industry.\n",
"role": "assistant"
}
] |
Economic fluctuations and crime: temporary and persistent effects
|
[
"Crime",
"Unemployment",
"Panel models",
"Hausman-Taylor specification",
"K42",
"J6",
"C33"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\n__NO_TITLE__: Nothing tends so much to corrupt mankind as dependency, while independency [...] increases honesty of the people. The establishment of commerce and manufactures, which brings about independency, is the best police for preventing crimes. (Smith, 1896, p. 155).\n1. Introduction: On an intuitive level, one should expect a strong relationship between crime and economic fluctuations in socioeconomic status. During a downturn, income decreases and poverty and unemployment rise, leading to social exclusion and deprivation, which induces stress and frustration. These make for a frayed social fabric that causes crime to increase. In addition, a sharp change in income and wealth may provide increasing motivation and a growing set of opportunities for criminal behavior.\n2. Data and descriptive statistics: In order to examine the issues discussed above, data on crime rates, unemployment rates and other covariates are collected at the US state level from 1965 to 2006. Several different categories of crime rates per 100,000 people from the Uniform Crime Reports are used[4]: violent crime, property crime, murder, rape, robbery, aggravated assault, burglary, larceny theft and vehicle theft. Unemployment rates, the proportion of a state's population that are in various age ranges, the proportions which are white, African-American, or Hispanic, the proportions with no high school, high school, or college degrees, the proportion of adults who are divorced and the proportion of single female headed households with minors are obtained from the US Bureau of Labor Statistics website (www.bls.gov) or from Flood et al. (2015). A quadratic specification of time trends is also used in the regressions.\n3. 
Methodology and regression results: The above evidence relies on averages and ignores the effect of possible confounding factors, such as the demographic makeup of the state and measures of the \"social fabric\" of the state. In order to control for such confounding factors, a more rigorous analysis is required. This section describes several different empirical methodologies to investigate whether unemployment is correlated with higher or lower crime rates, ceteris paribus. Since this paper utilizes panel data, the estimating equation has the form:\n4. Conclusions: On an intuitive level, one should expect a strong relationship between crime and economic fluctuations in socioeconomic status. Downturns make for a frail social fabric susceptible to social stress and disorder that causes crime to increase, as well as diminishing the economic resources available to economically vulnerable people. However, the literature on the effect of deteriorating economic fortunes as approximated by the unemployment rate on crime shows a weak relationship at best. Using crime-rate data for US states from 1965 to 2006, standard fixed effects estimations show few statistically significant effects of unemployment on a variety of crime rates, and when the relationship is statistically significant, it is negative - implying that increases in unemployment rates decrease crime.\n",
"role": "user"
},
{
"content": "Purpose: Since the literature on the effect of the unemployment rate as reflection of economic fluctuations on crime shows an empirically ambiguous effect, the purpose of this paper is to argue that a new way of modeling the dynamics of unemployment and crime by focussing on the transitory and persistent effect of unemployment on crime helps resolve this ambiguity.\nDesign/methodology/approach: Panel data for US states from 1965 to 2006 are examined using the Mundlak (1978) methodology to incorporate the dynamic interactions between crime and unemployment into the estimation.\nFindings: After decomposing the unemployment effect on crime into a transitory and persistent effect, evidence of a strong positive correlation between unemployment and almost all types of crime rates is unearthed. This evidence is robust to endogeneity and the controlling for cross-panel correlation and indicators for state imprisonment.\nOriginality/value: The paper is the first to examine the dynamics of the interaction of crime and economic fluctuations using the temporary and persistent effects framework of Mundlak (1978). In one set of estimates, one can evaluation both the short- and long-run effects of changes of unemployment on crime.\n",
"role": "assistant"
}
] |
"These kids are rebelling": a student-led transformation of community and critical literacy
|
[
"Case study",
"Qualitative research",
"Student agency",
"Classroom community",
"Ethnographic observations"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\n__NO_TITLE__: \"Community\" is a concept regularly invoked in a range of settings, including educational spaces. However, despite regular use of the word, there is no universal notion of what \"community\" is, how it is built or supported, or how it matters - particularly, for this paper, in reference to teacher/student interactions and critical literacy.\nDefining \"community\" and \"critical literacy\": Because our analysis worked to explore students' and teachers' negotiations of both \"community\" and \"critical literacy\", we want to clearly explain how we understand each concept. \"Community\", with its long-term grounding in sociology (Geertz, 1973; Gruenert and Whitaker, 2015), is generally a contextually-informed concept, taking on characteristics associated with a specific location's or timeframe's cultural norms (Lewin, 1951; McKee, 2000). Because our focus on community was contingent on the interactions both between students and their teacher, and between the students themselves, our understandings of \"community\" were based on Chavis and Lee's (2015) discussion (Chavis and Wandersman, 1990). Chavis and Lee write:\nConceptual framework: A major component of Luke's understanding of critical literacy was the importance of community, and one often-overlooked but important element of community-based literacy was Luke's discussion of community-based rebellions, or \"uprisings\" (2012, p. 7). Luke explained that because critical literacy is necessarily transformative, those who have been marginalized might, as a community, actively work to challenge oppressive authority and to question the natures of generally-accepted truths and knowledge through literacy practices. Specifically, Luke noted that critical literacy empowers people and their communities to \"analyze, critique, and transform the norms, rules systems, and practices governing [...] everyday life\" (p. 
5).\nLiterature review: Our conceptual framework and research questions guided our literature review, with the first section focusing on the ways that community-based support shaped students' experiences and agency, particularly in relation to rebellion, and the second section considering the influences of student community on instruction - especially opportunities for critical literacy.\nContexts: We recognize the degrees to which contexts shape communities and effective enactments of critical literacy, and thus provide contextual details for Val[1], a high school English teacher, the summer program in which he taught, and the students whose rebellion reshaped his classroom and their learning experiences.\nMethods: Though excerpted from a data corpus examining the English Department, this paper focuses specifically on a single participant, course, and group of students; therefore, we examine these data as a narrative-based descriptive case study (Merriam, 2007; Tobin, 2012). We chose to focus on this single instance for several reasons. Within the residential program, there were numerous opportunities to explore \"community\", and the English Department as a whole worked to encourage critical literacy, but the moment on which this paper focuses was unique and extended scholarship on English teaching in valuable ways. It was a moment when students - not adults - redesigned understandings of both community and critical literacy, and in doing so, simultaneously addressed injustice, reknitted their relationships, and supported a veteran educator's personal and professional growth.\nData analysis: After revisiting the data corpus, we considered a range of possible foci for analysis, but Val's public speaking course, particularly the class debate, demanded attention for the reasons previously explained. 
It was an anomaly in the summer program, given its disruption within and beyond classroom spaces, and was an instance that Val continued to revisit months after its occurrence. Deciding, then, to narrow our focus to this moment, we created the following research questions:\nFindings: Because students were from a range of schools and demographic backgrounds, Val encouraged class community by building students' trust and self-confidence, especially in relation to public speaking. Before the students' first speeches, Val told them,\nDiscussion: Luke (2012) pointed out that key elements of critical literacy are \"understanding learners' life worlds\" and \"framing and solving real world problems\" (p. 5). Critical literacy, when done effectively, should begin with and sustain community-based dialogue, while empowering \"new ways of knowing and acting upon the world\". The class debate first centered students' self-constructed community relationships and leadership, and then critical readings of themselves, Val, and course texts. This moment embodied all the potential that might be found in community-based critical literacy: student empowerment, teacher learning, and student-driven, relevant curriculum.\n",
"role": "user"
},
{
"content": "Purpose: This paper aims to consider the ways that students are shaped by and shape community and critical literacy, along with the ways that community affords student empowerment in an English class during a US high school summer enrichment program.\nDesign/methodology/approach: The qualitative methodological approach is a narrative-based descriptive case study. To provide a detailed and narrative-based discussion, the authors incorporate ethnographic observation narratives and conversational interview excerpts, and analyze the data through inductive coding.\nFindings: Organizing the findings into two sections, \"These kids are rebelling\", and \"We're trusting him to teach and do better now\", we first examine the ways that student-led rebellion reshaped the classroom community and then the ways that the teacher's response redefined critical literacy approaches and his interactions with the students.\nResearch limitations/implications: As this is a qualitative case study that is set during a summer enrichment program, its implications are not wholly generalizable to secondary English education. However, this research does suggest the importance of student agency in considerations of community and critical literacy.\nPractical implications: This research emphasizes the importance of acknowledging and exploring ways that students' everyday interactions and agency shape educational spaces. Additionally, this research suggests the importance of community and critical literacy to all teachers, no matter their levels of experience or success.\nSocial implications: Students have tremendous potential to not only shape and define learning environments, but to transform pedagogy and teacher relationships. 
This research emphasizes the importance of acknowledging and exploring these implications specifically to transform community and critical literacy in a summer high school English classroom.\nOriginality/value: First, this paper examines student community as an agentive and rebellious influence within the everyday constructs of schooling, and the authors assert that critical literacy pedagogies may be student-driven as part of community-based activism. Second, this paper seeks to explore both \"community\" and \"critical literacy\" as key concepts in positioning students as influential and empowered stakeholders with capacities to reshape education.\n",
"role": "assistant"
}
] |
CastAlum forges a more flexible, motivated and productive workforce: Bespoke training transforms the fortunes of component manufacturer
|
[
"Training",
"Awards",
"Organizational performance",
"Retention",
"Teamwork",
"Employee involvement"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\n__NO_TITLE__: Award-winning training helped a major UK supplier to automobile makers not only to survive one of the most difficult periods for the industry - when orders were drying up and many of its competitors were laying off employees, but also to thrive with the attitude that \"we are all in this together\".\nThreat of redundancies: CastAlum managers were forced to implement a four-day working week in order to minimize the need for potential redundancies and to retain the existing skilled and loyal workforce.Benefiting from the Welsh Assembly government's ProAct program, designed to support companies forced to introduce short-time working, the firm received a wage subsidy to return employees to full-time working, and help with external training costs.\"The management team recognized that the very sharp period of decline was affecting the whole automotive-manufacturing sector and was harming employee morale and motivation,\" Hannah Barrett explained.The company was aware that automotive original-equipment manufacturers and leading customers generally expect to see year-on-year efficiency, quality and cost improvements.\"The management team was concerned that unless this issue was addressed it could have a serious impact on future efficiency and quality performance, and could also result in increased labor turnover,\" she added.As a result of the training program, CastAlum witnessed increased employee engagement in continuous-improvement activities, better employee motivation and improved efficiencies. 
The company was able to retain its skilled workforce and streamline the introduction of new products for 2009.CastAlum worked with human-resource development advisor and training provider Holistic Services to design a program that would help employees to understand the business, its environment, their role in its future success and how they could contribute to improvements in quality and efficiency. \"The objective would ultimately benefit staff in terms of securing employment opportunities,\" explained Hannah Barrett.The three-day Rising to the Challenge - Building the Capability program saw senior managers taking part in in-house workshops together with the shop-floor workforce.\nThe four modules described: The program was divided into four modules. The first was designed to address critical factors related to individual engagement and motivation, in order to secure commitment. The second module explained the skills that help in the practical implementation of process and continuous-improvement activities.The third module was designed to widen the perspective on implications and organizational challenges as well as build on the practical steps in the second module in order to sustain improvements.The fourth module was designed to teach the critical elements of effective teamwork, including clear direction, clarity of roles, clear and effective processes and effective relationships. Additional training was provided for managers, to equip them with the skills to lead, develop, coach and facilitate their team members.The modules, designed to develop critical reasoning and problem-solving skills, \"were delivered in a practical and enjoyable way, through role playing and positive participation,\" Hannah Barrett enthused.The cost was PS87,000 for 88 delegates. This included the price of the training course and the time cost of the workforce doing \"off the job\" training. 
It also included Institute of Leadership and Management accreditation, materials and travel.\"The program has helped to develop a culture where everyone feels valued and the whole company realizes that we all need to work together to improve individual, team and company performance to secure our long-term futures,\" said Hannah Barrett.\"Due to the cross-functional make-up of the delegates on each course, we have been able to break down traditional demarcation between departments. Individuals have demonstrated an increased willingness to identify and implement continuous-improvement activities and morale appears to be very high.\"\nThe improvements realized: Since the training program, CastAlum has witnessed improvements, including:* increased flexibility and skills, with employees now more willing to be trained in new areas;* exceptional employee retention, with no employees having left the business in the past 15 months;* the generation and implementation of new ideas;* a new product successfully introduced; and* investment in the design, building and development of a machining shop, a subsidiary selling point for clients.These benefits have also contributed to improvements in company performance. Sales turnover in 2009 was more than PS13.337 million, against a forecast of PS12 million. Gross profit achieved was PS6.62 million - an increase of more than 13 percent on the previous year. Employee numbers have grown from 88 to 117. Overall equipment effectiveness continues to grow, reaching 65 percent. On-time delivery is at 95 percent. 
And a scrap/rework/rejects/quality of 6 percent has been achieved.\"By ensuring that the managers received additional development, we believe they now have the capability to support continued application of the principles of the program to achieve continuous and sustainable improvements,\" Hannah Barrett concluded.More than 60 employees have so far completed the program and the rest are currently taking the workshops.\nNote: David Pollitt, Human Resource Management International Digest Editor, wrote this article.\n",
"role": "user"
},
{
"content": "Purpose: - This paper aims to describe an award-winning training program at UK aluminium die-casting company CastAlum.\nDesign/methodology/approach: - The paper explains the reasons for the program, the form it took and the results it achieved.\nFindings: - The paper details how the Rising to the Challenge - Building the Capability program increased flexibility and skills among the workforce, boosted employee morale and retention and encouraged teamwork and the generation and implementation of new ideas.\nPractical Implications: - The paper highlights the financial turn-around that the company achieved against the background of a downturn in the automobile industry.\nSocial Implications: - The paper details a training program that not only helped CastAlum to move from a four-day week to five, but also laid the foundations for expanding the workforce from 88 to 107.\nOriginality/value: - The paper provides the inside story of an award-winning and transformational training program.\n",
"role": "assistant"
}
] |
The role of star performers in software design teams
|
[
"Team performance",
"Team working"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\n__NO_TITLE__: Teamwork is a popular and widely used work design feature in today's organizations (Ilgen et al., 2005; Kozlowski and Bell, 2003). This specifically applies to knowledge-intense and complex work such as software design (Hoegl and Parboteeah, 2006; Jones, 1996). By definition, work teams are social entities that interact at varying degrees of interdependency \"to achieve specified, shared, and valued objectives\" (Salas et al., 1992, p. 4). One main research interest in team literature concerns the predictors of team performance. Whether teams are successful or not depends on team input variables (e.g. expertise, personality, team size) and on team process variables (e.g. information exchange, team climate, team affective tone, and conflict) (see, e.g. Kozlowski and Ilgen, 2006, for a review). In the present study, we focus on expertise as input variable since expertise has been considered to be one of the most important factors that predict team performance (Faraj and Sproull, 2000; Hackman, 1987; McGrath, 1984). Building on earlier research, we differentiate team members' expertise into task functions and team functions (Bales, 1950; Marks and Panzer, 2004; Stewart et al., 2005) with task functions referring to behavior that aids directly in the completion of work-related activities, and team functions facilitating the interpersonal interaction necessary to work together as a team.\nExpertise: Experts are defined as individuals \"who are consistently able to exhibit superior performance for representative tasks in a domain\" (Ericsson, 2006, p. 3). In teams, individuals can be experts in task functions and/or team functions. 
Task functions imply that someone is proficient in problem solving and knows how to tackle difficult technical problems whereas team functions indirectly contribute to the fulfillment of a team's task and goal requirements by directing and coordinating the efforts of other team members. Task and team functions can be considered as team roles which have been defined as \"a set of behaviors that are interrelated with the repetitive activities of others and characteristic of the person in a particular setting\" (Stewart et al., 2005, p. 344). Previous research has considered both task and team functions to be necessary and important for effective teamwork (Stevens and Campion, 1994). We refer to those individuals in task and team functions as experts who are judged by other team members as showing the respective set of behavior most often.\nSoftware design teams: Software design teams work on defined, specialized, and time-limited projects and often disband after they have finished a project (Sundstrom et al., 2000). Software design is a complex task and encompasses different sub-tasks such as requirement analysis, software design, programming, testing, and debugging (Sonnentag et al., 2006). In contrast with laboratory ad-hoc teams, software design teams exist for a longer time, often have to deal with new and unexpected challenges, and pursue meaningful goals such as the development of innovative and marketable products (Kiesler et al., 1994, Stempfle and Badke-Schaub, 2002).\nExpertise in task functions and team performance: For over 50 years, experts have attracted considerable research interest in decision-making research. Early research on decision-making in small cooperative problem-solving groups dealt with the question of how groups with different individual preferences come to a single collective group solution (e.g. Davis, 1973; Lorge and Solomon, 1955). 
Within this line of research, the \"truth-wins approach\" (Lorge and Solomon, 1955; Smoke and Zajonc, 1962) in which the correct answer is given when only one group member is correct was supported for tasks with strong demonstrability of correct solutions (Bonner et al., 2006). Such tasks with high demonstrability comprise intellective tasks, which are widespread in real-world settings and incorporate (software) engineering team tasks (Laughlin, 1980). For intellective tasks, the best group member was found to be necessary and sufficient for best group outcomes (Bonner et al., 2006; Henry et al., 1996) and exerted twice as much influence on other group members (Bonner et al., 2002). Experts' significance on decision making has also been shown to depend on the expert's centrality in the team, meaning that the total discrepancies between the expert and everyone else should not be too extreme; otherwise, expert's inputs are likely to be discounted (Davis, 1996).Although no research explicitly addressed the relative importance of experts in real-world teams, following the previously outlined findings from laboratory decision making research we propose that experts in task functions significantly predict team performance beyond the team's average score in task functions for several reasons: Experts in task functions possess superior domain-specific knowledge (Sonnentag, 1998). From frequent interactions with numerous opportunities for showing task-proficiency, other team members are able to recognize who is most knowledgeable in task functions (Goodman and Shah, 1992; Hollenbeck et al., 1995; Littlepage et al., 1997). Consequently, other team members should contact the expert frequently for information search and information exchange. Status expectation theory (Berger et al., 1972) also suggests that good performers are more central and will be considered more frequently as a source for information in team settings. 
According to status expectation theory, high status members (e.g. experts) are more actively involved in interpersonal interactions. Empirical findings support the assumption that experts are more engaged in the group's activities compared to other group members: Experts have been shown to mention and repeat more information, to be more central in groups, and to put emphasis on specific aspects of knowledge (Bunderson, 2003; Faraj and Sproull, 2000; Karakowsky and McBey, 2001). In accordance with these findings, we propose that the experts in task functions should significantly predict team performance beyond the average performance score in task functions.H1. The expert in task functions (Time 1) positively predicts team performance (Time 2) over and above the team's average expertise in task functions (Time 1).\nExpertise in team functions and team performance: In addition to the importance of experts in task functions on team performance, we propose that experts in team functions who guide the team process are also essential to the productivity of the team. The importance of team functions is also stressed in Stevens and Campion's (1994) typology of generic and transferable knowledge, skills, and abilities (KSAs). The examples of teamwork as described by Stevens and Campion (1994) include conflict resolution, collaborative problem solving, goal setting, and performance management. In this research, we build on Stevens and Campion's (1994) emphasis on the relevance of teamwork behavior and focus on intellectual stimulation as one important aspect of team functions. In terms of Stevens and Campion's typology, intellectual stimulation can best be classified into the interpersonal category of teamwork KSAs. We chose intellectual stimulation as a team function because maintaining other team members' motivation is crucial for success in knowledge-intensive teams (and beyond) (Keller, 2006). 
For example, if a team does not reframe problems, question assumptions, and approach situations in new ways, a team's performance will suffer, especially in software design (Sonnentag et al., 2006).Intellectual stimulation is defined as a way of encouraging team members to be creative by questioning assumptions, reframing problems, and approaching old situations in new ways. New ideas and creative solutions are thereby elicited from other team members (Bass, 1985). Team members high in intellectual stimulation encourage other team members to think about problems more thoroughly and to address difficulties (Keller, 2006; Zaccaro, 2001). Intellectual stimulation implies that these team members are acting as a teacher and engaging in behaviors that are innovative and new and that go counter to existing norms (Pearce and Conger, 2003). Software development teams, which usually deal with innovative tasks that require radical and original ideas, should benefit from intellectual stimulation. Team members high in intellectual stimulation may provide new and inspirational ideas without necessarily having the technical skills to implement these ideas (Bass, 1985; Pawar and Eastman, 1997). This implies that the expert member in team functions is not also considered the expert member in task functions. Accordingly, research provides evidence that the expert member in task functions is seldom also the expert member in team functions (Bales and Slater, 1955). Teams often informally choose team members who offer not only guidance but also a mission and purpose to the team of their own (Wheelan and Johnston, 1996), and informal leadership emerges from negotiated roles with team members (Seers, 1989). Zaccaro et al. (1991) found that informal leaders are individuals who are very good at identifying other team members' needs and requirements. 
These informal leaders have been shown to significantly affect team activities as well as the structure and productivity of teams (Wheelan and Johnston, 1996). With respect to intellectual stimulation, which implies the expression of radical and original ideas, we assume that the expert member takes special responsibility for a team's success, as a duplication of team roles could have a negative effect on team performance (Blenkinsop and Maddison, 2007). Summarizing, we propose that the score of the expert member in team functions - here considered intellectual stimulation - should predict team performance beyond the team's average score in team functions.H2. The expert in team functions (Time 1) positively predicts team performance (Time 2) over and above the team's average expertise in team functions (Time 1).\nPresent research: In our study, we investigate the importance of experts on team performance by means of both task and team functions. Our study contributes to the literature in several ways: First, our study extends previous research by analyzing the role of experts for overall team performance that goes beyond team members' average performance level. Second, when examining the role of experts in teams, we acknowledge Stevens and Campion's (1994) emphasis of teamwork behaviors and regard team functions separately from task functions in our study. Third, our study extends prior research on the role of experts in teams (e.g. Bonner et al., 2006; Henry et al., 1996) to a field setting, thereby increasing external validity.To ensure that our hypothesis testing was as valid as possible, we controlled for some variables that have the potential to influence our results. First, we controlled for team meeting frequency, as the expert member's effect on team performance might vary depending on how often team members communicate with each other in meetings. 
Second, we controlled for gender composition since heterogeneous groups are often worse in identifying and incorporating team member expertise (Dovidio et al., 1998; Thomas-Hunt et al., 2003). Third, experts' relative importance with respect to team performance might depend on task type. Thus, following Steiner's taxonomy (1966, 1972), we controlled for team members' evaluations of the task as being additive, disjunctive, and conjunctive. Additive task type means that team performance is equal to the sum of the team's parts; disjunctive task type implies that team performance is best predicted by the team maximum (i.e. expert team members); and conjunctive task type means that the team minimum (i.e. the weakest performing team member) best predicts team performance. As software design tasks consist of a mix of these three task types (Stewart and Barrick, 2000), we controlled for all three task types.\nMethod: Overview\nResults: Descriptive findings\n",
"role": "user"
},
{
"content": "Purpose: - This study seeks to extend previous research on experts with mainly ad-hoc groups from laboratory research to a field setting. Specifically, this study aims to investigate experts' relative importance in team performance. Expertise is differentiated into two categories (task functions and team functions) and the paper aims to investigate whether experts in task and team functions predict team performance over and above the team's average expertise level.\nDesign/methodology/approach: - Longitudinal, multi-source data from 96 professional software design engineers were used by means of hierarchical regression analyses.\nFindings: - The results show that both expert members in task functions (i.e. behavior that aids directly in the completion of work-related activities) and the experts in team functions (i.e. facilitation of interpersonal interaction necessary to work together as a team) positively predicted team performance 12 months later over and above the team's average expertise level.\nResearch limitations/implications: - Samples from other industry types are needed to examine the generalizability of the study findings to other occupational groups.\nPractical implications: - For staffing, the findings suggest that experts are particularly important for the prediction of team performance. Organizations should invest effort into finding \"star performers\" in task and team functions in order to create effective teams.\nOriginality/value: - This paper focuses on the relationship between experts (in task functions and team functions) and team performance. It extends prior research on team composition and complements expertise research: similar to cognitive ability and personality, it is important to take into account member expertise when examining how to manage the people mix within teams. Benefits of expertise are not restricted to laboratory research but are broadened to real-world team settings.\n",
"role": "assistant"
}
] |
AWB turns crisis into a scandal: Honesty really can be the best policy
|
[
"Corporate governance",
"Risk management",
"Change management",
"Corporate communications",
"Crisis management"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\n__NO_TITLE__: When rumors emerged about misuse of the \"oil for food\" aid program, by which the United Nations allowed Iraq to sell oil in exchange for food, medicine and other humanitarian needs, a company at the center of the allegations initially denied them and then tried to distance itself from the findings of a United Nations (UN) inquiry.\nCover-up as a mechanism to defend the indefensible: Unfortunately for AWB, its strategy of denial, shifting or sharing blame and equivocal apologies was unsuccessful in preventing further scrutiny and assuaging the needs of stakeholders for contrition and punishment. Even more unfortunate for the company was that this deliberately defensive stance of minimizing responsibility and accountability was contradicted by the findings of the royal commission. However, even more damaging was that the disingenuous nature of the strategy was exposed and that the company was caught out trying to frustrate and obfuscate the inquiry process.\nWhen self-interest is the only thing being protected: In no mood for admitting guilt, AWB took the alternative course of being deliberately disingenuous and deceptive in responding to the initial crisis - a position which proved to be totally ineffective in managing the crisis and significantly contributing to a public scandal which eventually engulfed the company. 
In due course, when dirty linen had been thoroughly washed in public through a US investigation, a UN inquiry and the Australian royal commission, there was a revised response from a new management team in the lead-up to the sale of the company to Canadian multinational Agrium.\nComment: This review is based on \"Things can get worse: how mismanagement of a crisis response strategy can cause a secondary or double crisis - the example of the AWB corporate scandal,\" by Grebe (2013). When an Australian wheat exporter was accused of illegal links with the Saddam Hussein regime, the company's initial response was found to be deceptive and disingenuous. This paper describes how mismanagement of a crisis can lead to an even greater scandal, and compares the relative outcomes of \"cover up\" vs \"coming clean\" strategies.\n",
"role": "user"
},
{
"content": "Purpose: The purpose of this paper is to highlight the circumstances in which a crisis-response strategy can compound a crisis. The paper focuses on AWB Ltd, Australia, where the organizational damage of a \"cover-up\" caused additional damage to the company.\nDesign/methodology/approach: It provides an insight into the application of theories and research on crisis and reputation management and the specific challenges and risks of corporate scandals.\nFindings: It demonstrates that corporate scandals can easily descend into a secondary or \"double crisis\" if incorrectly managed, or even mismanaged.\nPractical implications: Examines the initial failed response strategy and the more appropriate response eventually undertaken by the company.\nOriginality/value: Offers the additional insights of the author, as a former member of the management team at the company.\n",
"role": "assistant"
}
] |
Obstacles to upgrading customer value-in-use in retail banking
|
[
"Retail banking",
"Customer orientation",
"Customer perceived value",
"Value‐in‐use",
"Banking",
"Customers"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\nIntroduction: Despite retail banks attempting to improve levels of customer satisfaction, the business has suffered from declining customer loyalty in recent years. Fragmented demand and intensified competition have forced banks into price competition, reducing costs and dropping the level of face-to-face customer service (see e.g. Jagersma, 2006). In addition, online banking products and services, and the related automation, have changed service production radically (Acharya et al., 2008; Sarel and Mamorstein, 2003).Amidst ever-increasing competition, existing ways of gaining a competitive edge seem to be neither sufficiently effective nor durable. Developing customer perceived value may provide a new means to differentiate a service offering from that of competitors (e.g. Strandberg et al., 2012; Meyer and Schwager, 2007; O'Loughlin et al., 2004; Roig et al., 2006; Bick et al., 2004). However, as Frow and Payne (2007) have noted, few companies yet understand the meaning of customer value, in terms of what it looks like and how their managers can galvanise the organisation to deliver it exceptionally well. We suggest this is also true of the retail-banking sector, although the idea of customer orientation in that business is not new. Banks have traditionally been physically close to their customers through their extensive branch networks. The introduction of internet banking was predicated on a customer need for digitised banking services, and while encouraging its use, banks have also tried to maintain the impression of having close relationships with their customers by maintaining some face-to-face interactions. 
In spite of the current trend for internet banking, some banks recognise that a relationship strategy emphasising interaction, customisation and confidence building in their customer relationships is effective (Camarero, 2007; Molina et al., 2007), hence many customers are still offered personal advisors. However, it seems customers are not satisfied and do not feel they are gaining enough value from banking services. For example, O'Loughlin et al. (2004) argue that financial services providers continue to promote bank policies that are adverse to consumer interests, resulting in diminishing service quality and restricted delivery channels (see also Wong et al., 2008).Thus, it seems that in the present retail-banking business, the customer is a target of marketing activities, not the origin of plans for and maintenance of the service experience. We suggest that banking services are too often devised, developed and delivered without sufficient understanding of related customer activities, everyday practices and life, that is, of customers' value creation (see e.g. Payne et al., 2008). Banking services are to a very great extent provider oriented. They are products of a producer-centred mindset that gives primacy to the bank's own practices and processes (cf. Heinonen et al., 2010; also Vargo and Lusch, 2004, 2008a).It seems as if even the most innovative developments of the customer experience in retail banking are still based on a logic where the customer is \"forced\" to participate in the bank's process and tools (e.g. in the promotion of internet banking). This is not necessarily the evil intention of banks. Implementing customer orientation is challenging in any context (e.g. Gummesson, 2008; Vaux Halliday, 2002) and in retail banking, certain characteristics cause inertia militating against change. Although deregulation has increased competition, efficiency and flexibility in the bank sector since the 1980s (Lahteenmaki, 2006, p. 
23), banks have long been subject to regulation, which is why bankers are focused on developing intensively internal production procedures, and may find it more difficult to shift the focus to customer value (Batiz-Lazo and Wood, 2001). In addition, financial services are today more heterogeneous and delivered through multiple channels, which provokes fragmentation and further difficulties in internal integration and collaboration across multiple channels and service producers in the network of service producers (see e.g. Ple, 2006).The above-mentioned challenges are particularly pressing because the process of customer value creation demands a variety of organisational capabilities be integrated (see e.g. Gronroos, 2006; Gummesson, 2008). Service providers should be able to integrate all customer-targeted actions into a coherent entity. In addition, the bank should understand how its customers will further integrate the service generated by that entity into \"their own activity system\", that is, their value creation process (Heinonen et al., 2010, p. 542).Given the noted challenges, the path to customer centricity may necessitate remarkable changes to an organisation's management system, culture, structure and processes (see e.g. Shah et al., 2006). However, the level of existing knowledge of the development needed from service producer-centred logic towards being a co-creator of value in the customer's process is still generally low (see e.g. Frow and Payne, 2007; Heinonen et al., 2010), and especially in a specific context like retail banking. There are plenty of general studies of barriers to implementing relationship marketing or market orientation in different contexts (e.g. Jaworski and Kohli, 1993; Piercy, 1998; Vaux Halliday, 2002), and specifically in a retail-banking context (Nielsen et al., 2003; Kolar, 2006; Harris, 2000; Bennett and Durkin, 2002). Those studies are reviewed at the end of the following section. 
However, value creation is always very context-specific (e.g. Gronroos, 2006) and the studies mentioned were often conducted in other contexts, or, those that are specific to retail banking lack a link with the value creation discussion, being more concerned with comparing different market areas (Nielsen et al., 2003), exploring specific market development (Kolar, 2006) or only one aspect of customer orientation, like organisational culture (Bennett and Durkin, 2002). Facilitating value creation and improving customer experience seems to be a very attractive way to create a new competitive edge in retail banking (e.g. O'Loughlin et al., 2004). Nevertheless, developing such capabilities may give rise to several managerial challenges in the retail-banking context. Our aim is to offer a holistic view on organisational obstacles to upgrading customer value-in-use in retail banking. The following theoretical section defines the main concepts involved, before the aim of providing a holistic understanding is addressed by using an explorative case method.\nValue-in-use in retail banking: theoretical foundation: Value-in-use\nMethodology: The chosen method for this paper is an explorative, in-depth, single case study. Qualitative research methods are appropriate, because there were no exact, prior theoretical constructs of what constitutes the change needed in a retail-banking organisation in relation to the phenomenon in focus (Shaw, 1999). For the same reason, this case study is explorative. One very information-rich case was chosen to reveal the process of developing customer experience and value creation in the retail-banking context and the related obstacles facing such efforts. The case study method supports researchers in their work to form a holistic understanding of a complex phenomenon in focus (Yin, 2003, p. 
2; Kovacs and Spens, 2005).This study was preceded by a pre-study, which involved a meeting with a representative, either from senior management or the business development unit of all the banks active in the same local market. The bank chosen for the case study was selected because of its stated strategy of developing customer-centric practices extensively. The decision of the case company to do so offered a great opportunity to examine this phenomenon in the banking context in a longitudinal manner. This case was both intrinsic and instrumental (Stake, 1995). The case itself was interesting to investigate, and the case study made analytical generalisations possible (Yin, 2003, p. 10).Secondary data including earlier customer satisfaction surveys and other documents from the case company were used to provide background information. Primary data were gathered in a workshop, development sessions and thematic interviews and through participatory observation throughout the process (see Table I). One of the authors was involved in the development process, gathering data from its different phases. Before the interviews, the researcher facilitated a workshop in which a group of four managers created a common understanding of the value-in-use concept based on the existing theoretical understanding and with the help of a few case examples. In the two following development sessions, a researcher and one of the bank's managers (who was responsible for the research) created a loose framework of the meaning of the value-in-use concept in this specific context. In a subsequent executive board meeting attended by all the participating managers, the model was presented and further refined following discussion.The model was then used in the interviews to alert the managers to the potential of the concept and to discuss the obstacles that could impede its implementation. 
We wanted to understand how informants saw the value-in-use idea in the banking context compared to the prevailing value-in-exchange logic. Thematic interviews permit intensive interaction between a researcher and interviewee (see e.g. Kovacs and Spens, 2005). The whole senior management of the bank (nine people) was interviewed. Interviewees were first asked to describe the competition in their own business area, then to relate their own views on the development of a customer-centric orientation, the possibility of creating a competitive edge through use of this concept and also what the potential obstacles in the process might be. Finally, the discussion proceeded to address innovation in banking and how customer relationships influence innovation activities. The informants represent the retail bank's deepest understanding of the current banking business and strategic possibilities - and in addition have the ability to change that logic. The analysis of the interviews was presented in the final development session and workshop with the bank's executive to verify the results.The case study method supports an abductive research logic, enabling interplay between and simultaneous development of theoretical and empirical material (Kovacs and Spens, 2005). Our approach was abductive in the sense that theoretical and empirical understanding grew in the course of developing the value-in-use idea in the workshops, development sessions and interviews. Data analysis and data gathering occurred simultaneously, which is also typical of qualitative and abductive research (Stake, 1995, p. 9). The data were analysed with the help of content coding, theme-based categorising and mind mapping while themes arose from the data. 
The authors agreed with the organisation that the name of the company and the informants would remain confidential.\nEmpirical analysis: organisational obstacles to upgrading customer value-in-use in retail banking: The theoretical review has highlighted two important aspects of the structural/system level and cultural impediments to the change towards more customer-oriented business. The following empirical analysis is structured to reflect those aspects.Cultural obstacles\nConclusions: In this study, we researched the main organisational obstacles faced by a retail bank developing its customer orientation by encouraging a topical value-in-use mindset. Our empirical data clearly showed that changing the mindset within a bank is not without its challenges and requires change in the management system, cultural, structural and processual aspects (see also Shah et al., 2006; Harris, 2000). Those obstacles were found at both structure/system and cultural levels and in related business activities.For a retail bank, it is challenging first to understand the deepest meaning and character of customer value and second, to harness the resources of the organisation to deliver that value. The main barriers arise from the strong producer-oriented way of doing business, a lack of employee commitment, in the strong product- and sales-orientation, the restrictive network, difficulties in seeing the profit potential of adopting a new mindset, the lack of conceptualisation and proper segmentation, and finally, in a silo-style bank organisation. At the same time, these obstacles encompass the potential enablers of a competitive advantage. If a bank wants to gain a durable competitive edge based on a strong capability to co-produce value with its customers, addressing the obstacles described would present it with an operational roadmap to do so.Our study reinforces the idea of silos within organisational functions impeding development of customer centricity (see e.g. 
Nielsen et al., 2003), and how important a role reward systems have in facilitating cooperation and knowledge transfer across functions (e.g. Jaworski and Kohli, 1993). The current study also touches upon the importance of empowering employees involved with the customer interface (e.g. Bennett and Durkin, 2002). However, our study draws a more detailed picture of the situation in retail banks facing new challenges. It expands on the effects of a strong producer-oriented mindset, illustrates the characteristics of a strong sales orientation and the reasons behind it, shows how restrictive a company's network can be and how difficult it can be to understand the financial reasoning behind development needs. In addition, the study confirms that business practices like conceptualisation and segmentation have a role in the development towards customer centricity.Although there now seems to be a view of how bank services should be developed, it is very important to maintain a critical perspective and ask how far banks could go along the road to customer centricity. The profitability and efficiency of the company influence the right degree of commitment, and banks would need to investigate the extent to which they could focus on value-in-use while maintaining a balanced approach for each stakeholder group (see e.g. Gummesson, 2008) bearing in mind the resources available to the organisation. Banks must also not lose sight of the customers' real experience, as Gummesson asks, \"how intimate do they really want to get with suppliers?\" (2008, p. 317).Although there are different conditions - and a really customer-oriented approach demands much time and management effort - we believe that future changes in regulation, competition and customer behaviour will ultimately trigger a change in retail banking and force companies to focus on providing an integrated and holistic value-in-use.Limitations and avenues for future research\n",
"role": "user"
},
{
"content": "Purpose: - Improving capability to generate value for customers is seen as a way to create a new competitive edge, but developing the related organisational capabilities involves facing several obstacles that stem from the producer-orientation of the retail-banking business. This paper aims to focus on defining and describing those obstacles.\nDesign/methodology/approach: - An in-depth, qualitative single case study of a European retail bank was conducted.\nFindings: - A retail bank can find it challenging, first, to understand the deepest meaning and character of customer value and, second, to harness the organisational attributes to deliver that value. The main barriers lie in the strong producer-oriented way of doing business, a lack of employee commitment, a strong product and sales orientation, a restrictive network, difficulties in seeing the profitability aspect of a new mindset, lack of conceptualisation and proper segmentation, and finally, silo-style bank organisations.\nResearch limitations/implications: - The research is focused on the retail-banking industry, while the findings are transferable to other retail finance businesses.\nPractical implications: - Enhancing value creation may provide a competitive edge, but developing that edge means facing several obstacles, which may jeopardise the calculated positive return on investment. For management, understanding the value of the banking service in the customer's own context is critical.\nOriginality/value: - Developing customer experience and value-in-use has been suggested as a good starting point for customer orientation. However, research on its implications for organisations and potential obstacles to implementation remains scarce. This paper offers a detailed view on the organisational development necessary to generate value-in-use in a retail-banking context.\n",
"role": "assistant"
}
] |
Late bidding and the auction price: evidence from eBay
|
[
"Auctions",
"Internet shopping",
"Pricing"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\n__NO_TITLE__: Bidders' bidding strategies and the auction price have long been studied in traditional auctions. Researchers have spent decades attempting to model consumers' bidding behavior and how such behavior may influence the auction price. In most models of traditional auctions, a Nash equilibrium is often established. A Nash equilibrium in an auction defines a set of strategies for bidders such that no bidder can do better by bidding differently. In other words, if bidders are rational, their bidding behavior will eventually form a Nash equilibrium. Bidders' bidding strategies under Nash equilibrium are not specific bids, but a function of their private information.\nLiterature review: In recent years, a small but growing body of research on online auctions has begun to emerge in the marketing literature. Much of the literature focuses on two major issues. Some researchers examine how the auction price can be determined. For example, Ariely and Simonson (2003) report that high starting bids drive up the auction price, particularly when comparable items are not available. Kamins et al. (2004) conducted a field experiment to investigate the effect of a reserve price and a minimum bid on the final price of an auction. Suter and Hardesty (2005) extended Kamins et al.'s (2004) study by examining the effect of the starting bid on price in both laboratory and field experiments and find a positive relationship. Finally, Park and Bradlow (2005) developed an integrated model to explore factors (e.g., auction design and seller reputation) that may influence consumers' willingness to bid.Others investigate specific selling and bidding strategies. For example, Sinha and Greenleaf (2000) indicate how discrete bidding and bidder aggressiveness may affect sellers' optimal strategies. 
Wilcox (2000) reports that experienced bidders are more likely than inexperienced bidders to bid in the last minute and place a single bid in the same auction. Finally, Dholakia et al. (2002) show the existence of herding behavior; that is, bidders prefer to participate in auctions with existing bids.Late bidding\nHypotheses: The effect of late bidding on price\nMethodology: Data collection\nConclusions: Late bidding has become a widespread practice for online bidders. The data in this study indicates that 29 percent of the winners enter the auction at the last minute, and, on average, the winner's first bid occurs after 87.2 percent of the auction time has passed. Previous studies have suggested several benefits of late bidding, such as to avoid a potential bidding war, to improve a bidder's chance of winning, and to reduce a bidder's uncertainty (Wilcox, 2000; Roth and Ockenfels, 2002; Brint, 2003). However, no study examines the effect of late bidding on the auction price.The present study fills this gap. Empirical results indicate that late bidding does decrease the auction price and the result is robust. In other words, bidders are more likely to pay a lower price if they choose to enter an auction as late as possible. This study, however, does not find support for the moderating role of the auction type (IPV vs APV) on the effect of late bidding on price. One possible explanation is that for used computer CPUs, bidders may not rely on others' bids to form their valuations as much as we expected, considering that CPUs tend to have little variation in quality.Future research may expand this study by examining other product categories. For example, this paper selects an extremely competitive market with many sellers and bidders, while future studies may investigate a less competitive market with few traders (e.g., some rare antiques). 
On the one hand, it is expected that late bidding is less likely to occur and may have a weaker effect on the auction price in a less competitive market where bidding wars are rare. On the other hand, if bidders attend common value auctions, they may observe others' bids by bidding late so as to reduce their uncertainty and form their valuations. It would be interesting to investigate late bidding under different combinations of market structures (competitive vs. less competitive) and auction models (IPV vs APV vs CV).\n",
"role": "user"
},
{
"content": "Purpose: - The purpose of this study is two-fold. First, this study aims to examine how late bidding may have an effect on the auction price. Second, this study seeks to investigate how this effect can be moderated by different types of auctions.\nDesign/methodology/approach: - Data on computer CPUs were collected at eBay. A multiple regression analysis was performed to examine the effect of late bidding on price.\nFindings: - Empirical results indicate that late bidding decreases the auction price and the result is robust. This study, however, does not find support for the moderating role of the auction type (IPV vs APV) on the effect of late bidding on price.\nResearch limitations/implications: - This paper selects an extremely competitive market with many sellers and bidders. Future research may expand this study by examining late bidding under different combinations of market structures (competitive vs less competitive) and auction models (IPV vs APV vs CV).\nOriginality/value: - This paper is the first study that empirically tests the effect of late bidding on the auction price.\n",
"role": "assistant"
}
] |
An empirical investigation of tourist's choice of service delivery options: SSTs vs service employees
|
[
"Self-service technology",
"Need for interaction",
"Technology Readiness and Acceptance Model",
"Technology readiness index"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\n1. Introduction: Advancement in information, communication and technologies has considerably developed the self-service phenomenon from the initial stages of vending machines, coin-operated washer-dryers and self-pumped gas, to today's pervasive offering of self-service technologies (SSTs) such as automatic teller machines (ATMs), kiosks, scanners, touch screens and automated phone systems (Kokkinou and Cranage, 2015; Kim and Qu, 2014; Meuter et al., 2000). SSTs have today become very widely accepted technological interfaces worldwide (Rust and Espinoza, 2006). Studies on innovation adoption and diffusion have emphasized discovering the determinants of technology adoption and suggested various approaches to this end (Gomezelj, 2016). Two such approaches that have gained recognition are:\n2. Theoretical background: 2.1 Technology acceptance model\n3. Hypotheses and model development: The proposed conceptual model (Figure 1) represents a situation in which both service delivery options (SSTs and employees) are accessible and service delivery through SSTs necessitates tourists' response at least to some extent. Both TRI dimensions as well as system-specific dimensions are direct measures of SST adoption (Lin et al., 2007); therefore, this study extends TRAM by incorporating an additional intrinsic desire (NI) which assumes more importance in an offline context (Kaushik et al., 2015). In the proposed model, PU mediates the effects of PEOU and all four personality dimensions of TRI on tourists' behavioral intention to adopt SSTs. However, NI has been introduced as a countervailing mediator of the same set of variables (PEOU and TRI's personality dimensions) between PU and behavioral intention to adopt SSTs.\n4. 
Methods: Tourists visiting some of the famous destinations such as Agra (Uttar Pradesh), Haridwar (Uttarakhand) and Shimla (Himachal Pradesh) in northern India were selected as target population because:\n5. Findings: 5.1 Profile of respondents\n6. Discussion: 6.1 Conclusions\n",
"role": "user"
},
{
"content": "Purpose: This paper aims to offer and examine a conceptual model of tourist innovativeness toward self-service technologies (SSTs) to confirm whether tourists prefer service delivery by SSTs over employees in an offline hospitality environment.\nDesign/methodology/approach: Tourists' perceived usefulness (PU) of SSTs and need for interaction (NI) with service employees have been taken as crucial mediating variables to examine the effects of perceived ease of use and technology readiness index personality dimensions toward SST and employee-based service adoption.\nFindings: Findings reveal that both \"NI\" and \"PU\" play significant roles in Technology Readiness and Acceptance Model (TRAM) when tourists select one of two service delivery options - SSTs and service employees.\nResearch limitations/implications: The foremost limitation of the study is its dependence on domestic tourist samples. However, such samples were chosen because tourists comprising these samples tend to use similar service delivery options more, in turn increasing their use of SSTs available in sample hotels.\nPractical implications: The study gives a deeper understanding of TRAM with an extremely crucial mediating variable (NI) in an offline service context. It also provides useful insights to service providers and policy makers for developing new strategies and policies to enhance user experience.\nSocial implications: This study recommends the usage of numerous SSTs by tourists.\nOriginality/value: During extensive literature review carried out in this research, no study was found that proposed such an effective framework in an offline service context.\n",
"role": "assistant"
}
] |