| title (string, 6-301 chars) | keywords (list, 0-23 items) | messages (list, 2 messages) |
|---|---|---|
Assessment centers vs situational judgment tests: longitudinal predictors of success
|
[
"Assessment center",
"Fidelity",
"Managerial simulation",
"Situational judgment test"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\n__NO_TITLE__: In an Amicus Brief in the Supreme Court case of Ricci vs DeStefano, several industrial/organizational psychologists urged the use of high-fidelity work samples rather than low-fidelity work samples to minimize legal liabilities and maximize the efficacy of selection systems (Aguinis et al., 2009). The Ricci vs DeStefano case highlighted the fidelity of selection instruments pertaining to a highly physical job (firefighter), but also called attention to the importance of the fidelity of selection instruments overall. Specifically, what is the role of fidelity in selecting individuals for less physical complex roles, like managerial roles?\nThe role of fidelity in SJTs: In their conclusions, Lievens and Patterson (2011) noted the limitation posed by the written-SJT used in their study. Indeed, the format of the scenarios presented in an SJT is one factor that influences the validity of SJTs (Chan and Schmitt, 2002; Lievens and Sackett, 2006). Because the participant watches a physical interaction with the video SJT, it follows that video SJTs are higher in physical fidelity than written SJTs. Indeed, past research has demonstrated that in comparison to written SJTs, video SJTs are better predictors of interpersonally oriented criteria (Libbrecht and Lievens, 2012; Lievens and Sackett, 2006), and demonstrate higher criterion-related validity (Christian et al., 2010). Given that one of the overarching goals of the Lievens and Patterson (2011) study was to examine the impact of physical and psychological fidelity on the validity of managerial assessments, it is important to understand the potential bearing of using a video SJT instead of a written SJT.\nStudy hypotheses and research questions: The first purpose of this study is to examine the relationship between an AC and a video SJT. Prior research supports that both ACs and SJTs demonstrate criterion-related validity (Arthur et al., 2003; Gaugler et al., 1987; McDaniel et al., 2007), reduced subgroup differences in comparison to GMA (O'Connell et al., 2007; Ployhart and Holtz, 2008), and incremental validity beyond lower fidelity measures (Meriac et al., 2008; O'Connell et al., 2007). Research by Lievens and Patterson (2011) compared the relationship between an AC and written SJT and found the measures appeared to be measuring different constructs. The authors know of no study comparing the relationship between ACs and video SJTs, therefore no hypotheses are offered for the relationships among the managerial assessments:\nCriterion-related validity of ACs and SJTs: The central issue when evaluating the efficacy of any predictor in personnel selection contexts is the extent to which it is a valid predictor of some criterion variable (American Educational Research Association, American Psychological Association, and National Council on Measurement in Education, 1999). Prior research substantiates the criterion-related validity of both ACs (Arthur et al., 2003) and SJTs (McDaniel et al., 2001).\nMethod: Participants and procedure\nResults: Means, standard deviations, and correlations among study variables are reported in Table I. The relationship between the video SJT and the four AC dimensions was not statistically significant. In response to R1, the video SJT and the AC dimensions appear to be measuring distinct constructs. 
Also consistent with prior research (Lievens and Patterson, 2011; McDaniel et al., 2007; Meriac et al., 2008), GMA shared significant, albeit modest, variance with the video SJT and with three of the four AC dimensions.\nDiscussion: The Supreme Court Amicus Brief filed by Aguinis et al. (2009) focussed attention on the importance of physical and psychological fidelity of selection instruments. Research by Lievens and Patterson (2011) examined the role of fidelity in the validity of an AC and an SJT, both instruments often used in managerial assessment. This study presents an additional empirical comparison of the validity of ACs and SJTs, and in doing so provides additional empirical evidence for the distinctness of high- and low-fidelity managerial simulations. Despite the high degree of psychological fidelity associated with SJTs and ACs, a video SJT and AC dimensions showed little statistical relationship. Nevertheless, both the AC dimensions and SJT remained valid predictors of career success over half of a decade following the test administration. It is noteworthy that the predictive validity of the SJT decayed with time whereas the AC dimensions appeared to gain predictive validity with time. Finally, the results suggest that AC dimensions explain incremental variance in career success beyond GMA and an SJT. These findings lend support to conclusions by Lievens and Patterson (2011) that SJTs and ACs both have criterion-related validity, SJTs and ACs are not viable substitutes for one another, and ACs show incremental validity beyond SJTs and GMA. The pattern of results found by Lievens and Patterson (2011) did not change, even using a video SJT as opposed to a written SJT.\nConclusion: Despite their continued popularity in personnel selection contexts, only one existing study has compared the validity of ACs and SJTs. Accordingly, this study makes several contributions to the personnel selection literature. In particular, our results indicate: an AC and a video SJT assess orthogonal underlying constructs and are related to, but distinct from GMA; both the AC and a video SJT explain meaningful variance in outcomes six years following the test administration; the validity of the AC is more robust to time-related decay than that of the video SJT; and the AC explains variance in organizationally relevant outcomes beyond a video SJT and GMA. Our results underscore the need for additional research investigating the impact of fidelity on the psychometric properties of managerial and non-managerial simulations.\n",
"role": "user"
},
{
"content": "Purpose: The purpose of this paper is to provide an empirical comparison of a high-fidelity managerial simulation, assessment center (AC) ratings, to that of a lower fidelity managerial simulation, a video situational judgment test (SJT) in the prediction of manager career success.\nDesign/methodology/approach: Archival data were collected from a large utility company. A measure of general mental ability (GMA), an SJT, and an AC were examined as predictors of career success as measured by increases in salary.\nFindings: The AC and the video SJT used in this study appeared to assess different constructs, extending previous findings that ACs and written SJTs measure distinct constructs. Furthermore, the AC dimensions and the SJT remained valid predictors of salary over a six year span following the test administration. In addition, the AC explained significant incremental variance beyond GMA and SJTs in career success six years after the assessment.\nResearch limitations/implications: The SJTs and AC used in this study are similar in psychological fidelity, yet the ACs remained a more valid predictor over time. The recommendation is that lower fidelity simulations should not be used as prerequisites for higher fidelity simulations.\nPractical implications: The results lend general support to the value of high-fidelity instruments in predicting longitudinal success.\nOriginality/value: The paper offers a comparison of the validity of ACs and video SJTs.\n",
"role": "assistant"
}
] |
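
The incremental-validity finding in the row above is the kind of result typically obtained with hierarchical regression: enter GMA and the SJT first, add the AC dimensions second, and read off the R-squared gain. A minimal sketch on synthetic data follows; the variable names, effect sizes, and sample size are illustrative assumptions, not the paper's archival data.

```python
# Hierarchical regression sketch for incremental validity (synthetic data).
# Variable names, coefficients, and n are illustrative, not the paper's data.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 300
gma = rng.normal(size=n)                    # general mental ability
sjt = 0.3 * gma + rng.normal(size=n)        # video SJT score, modestly g-loaded
ac = rng.normal(size=(n, 4))                # four AC dimension ratings
salary = (0.2 * gma + 0.15 * sjt
          + ac @ np.array([0.2, 0.15, 0.1, 0.1])
          + rng.normal(size=n))             # career-success criterion

# Step 1: GMA and SJT only.
x1 = sm.add_constant(np.column_stack([gma, sjt]))
r2_step1 = sm.OLS(salary, x1).fit().rsquared

# Step 2: add the AC dimensions; the R-squared gain is their incremental validity.
x2 = sm.add_constant(np.column_stack([gma, sjt, ac]))
r2_step2 = sm.OLS(salary, x2).fit().rsquared

print(f"R2 step 1 = {r2_step1:.3f}, R2 step 2 = {r2_step2:.3f}, "
      f"delta R2 = {r2_step2 - r2_step1:.3f}")
```
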
Knowledge management in the Malaysian aerospace industry
|
[
"Knowledge management",
"Aerospace industry",
"Corporate strategy",
"Organizational culture",
"Malaysia"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\nIntroduction: The increasing importance of knowledge in the modern and globalized organization has initiated a need to focus on managing knowledge as an organizational and competitive asset (Davenport et al., 1998; Gupta et al., 2002; Sveiby, 1996) as well as a source of wealth (Davenport and Prusak, 1998). In addition, a well-implemented knowledge management (KM) program may lead to reduced operating costs, a faster new product development cycle and better customer service (KPMG Consulting, 2000; Liebowitz, 2000; Levett and Guenov, 2000; Shariq, 1997).Several of models for KM praxis have been proposed. Gupta and McDaniel (2002) describe KM as a strategic process and suggest that harvesting, filtering, configuration, dissemination and application are essential for effective KM. The \"Institutional Knowledge Evolution Cycle\" developed by Wiig (1999) consists of five stages: development, acquisition, refinement, distribution and deployment, and leveraging. Pike and Roos (2001) propose using Boisot's six-step social learning cycle as a KM framework to examine how organizations deal with the practicality of creating, managing and using knowledge. They contend that the six steps of scanning, problem solving, abstraction, dissemination, absorption and impacting represent a complete mechanism to describe the diversity of the KM processes. While the terminology used in these models is different, the processes of acquisition, processing, transfer and application of knowledge are recognized by all them.However, these models do not provide us with detailed descriptions about how KM is actually practised in a business environment. Further, these models are highly generic and of limited value for developing a particular KM strategy. In fact, the practical implementation of KM is influenced by our ability to understand how knowledge is used to perform work (Gore and Gore, 1999). Indeed, there has been little research and pragmatic discussion about how knowledge can be managed and used more effectively to support daily practice (Davenport, 1998; Levett and Guenov, 2000; Liebowitz, 2000; Salisbury, 2003). Moreover, most of the dialogue on KM in organizations is not about detailed tactics, but rather about high-level principles.As knowledge is an abstract construct, managing knowledge itself in reality is impossible (Gupta and McDaniel, 2002). Thus, only knowledge-related actions and practices can be managed. In this context, KM can only be made operational through a series of knowledge activities or projects (De Long and Fahey, 2000; Lytras et al., 2002). Further, knowledge activity must be integrated and become embedded in organizational process in order to reach its full potential to create economic value (Wiig, 2000).Although KM is now a well-established discipline and an emerging trend in industry, its application is rather confined to large organizations, such as those in the computer, oil and gas, pharmaceutical, telecommunication, financial and consulting services industries (Brand, 1998; Chase, 1997; Davenport et al., 1998; Liebowitz, 2000; Wiig, 1997).\nSuccess factors for KM: According to Hariharan (2002), many KM practices or projects fail because they are treated as technological in nature. This is evident in a recent McKinsey survey of 40 companies in the USA, Europe and Japan showing that many executives think that KM is all about sophisticated information technology systems (Hauschild et al., 2001). 
Thus, KM practices need to be closely integrated with business strategies and core competencies of an organization (Clarke, 2001; Hariharan, 2002; Martin, 2000; Wiig, 2000). To ensure success, KM programs must have horizontal cohesiveness, from the business needs to the improved processes that are supported by appropriate technology and human factors (Skyrme, 1999). Managing knowledge in a dispersed organization is complicated and is culturally sensitive (Barth, 1999). Furthermore, organizational culture is increasingly recognized as a major barrier to the implementation of KM (Bennett and Gabriel, 1999; Davenport et al., 1998; Ruggles, 1998). However, organizational culture is difficult to change and managing knowledge requires new mindsets. Some researchers (Barth, 1999; De Long and Fahey, 2000; McCampbell et al., 1999) have offered a few \"lessons learned\" to alleviate the sensitivities to cultural issues in practising KM. The \"lessons learned\" are:\n* Design KM systems that adapt to local conditions.\n* Promote a knowledge-friendly culture.\n* Anticipate miscommunication in multicultural teams.\n* Optimize corporate culture for knowledge sharing.\n* Be sensitive to the different cultures of functional competencies within the corporation.\nIn a study carried out by Davenport (1997), one of the key challenges to KM practices was found to be overall management support. Accordingly, it is evident that greater management commitment will result in increased resources being directed to KM practices. The essential logic of successful KM is that top management creates vision and middle management creates conceptual resolutions and implements the defined practices. This is known as a \"middle-up-down\" (Nonaka, 1988, p. 18) approach that involves the whole organization in building a strong knowledge culture within the organization. Hence, there is a need for organizations to employ a structured approach to develop the production, flow, deployment and use of knowledge to create economic value (Clarke, 2001). While the literature has been relatively silent about the success factors necessary to implement effective KM practices in organizations, some inferences can be made from the literature to date. There appear to be three key issues that need to be addressed by any organization wishing to improve its KM practice. The first of these is Awareness Cultivation. This concerns the level of understanding of the fundamentals of KM, its theories and importance. The second is Objectives Definition. This refers to recognizing and realizing the strategic importance of KM as a tool within the organization. The third is Actions Implementation, which concerns the formulation of a plan which will enable KM assets to be used to support organizational objectives. These three factors provided a starting point to further build our understanding of how KM practice might be effectively implemented in an organization. Specifically, this investigation involved the Malaysian aerospace industry and sought to examine how KM might be successfully implemented as a strategic tool to enhance competitiveness. In the Malaysian context, existing research in KM is confined to examining its role and relationship to organizational factors (Narayanan et al., 2003; Telk Raj Sharma, 2003). The extent to which KM is practised, specifically in high-technology enterprises such as aerospace organizations, has yet to be established. The Malaysian aerospace industry has been classified as a knowledge-intensive industry (Government of Malaysia, 2001, p. 
253). Furthermore, in accord with the national aspiration of achieving a technologically competent nation by the year 2020 and the inception of the development of a knowledge-based economy, the growth of the aerospace industry has been intensified to make the industry the base for high technologies (Ministry of Finance Malaysia, 2001). However, in the pursuit of this ambitious national agenda, there has been a concern about the viability of the high technology, knowledge-intensive aerospace industry (IMD, 2001). These factors made the aerospace industry a potentially fruitful case study to investigate the use of KM as a strategic tool to sustain growth and increase competitive value in a knowledge-intensive industry.\nMethodology: This study involved two case studies and techniques derived from grounded theory using a Glaserian approach (Glaser and Strauss, 1967; Glaser, 1998) and convergent interviewing (Dick, 1998, 2000). Two Malaysian aerospace organizations were selected, given their place as leading aerospace companies in Malaysia with annual turnovers of over RM5 million. The nature of their operations was related to manufacturing of aircraft and parts, or maintenance and servicing of aircraft systems and components. Six key personnel were interviewed in one organization and five in the other, using purposive sampling. Selection was also based on the principle of attempting to seek disconfirming evidence. Thus, people with a range of roles and potentially different views were interviewed, including managers in engineering, operations, business, administration, marketing and production. Interviewees were asked about: what their organizations knew and were doing to manage knowledge; the fundamental role and benefits of KM as a strategic tool in their organization; and the barriers and success factors in implementing KM. Follow-up questions depended on the responses of the interviewees and were used to expand, explore further or clarify points. Interviews were tape-recorded to ensure accuracy of data analysis. A process of open coding (Glaser, 1978) was used to analyse the data after each interview. Coding was based on the qualitative evaluation of key sentences to form categories and theoretical concepts from the data. As the research progressed, core categories emerged until saturation was reached.\nFindings and discussion: The initial analysis was concerned with understanding the way in which people in the aerospace industry understood KM and its implementation. Through the coding processes used in grounded theory, four key concepts were derived from the data analysis, each of which comprised a number of themes and elements. These are summarised in Table I. The first concept, Consciousness, consists of themes that demonstrated an awareness of what constitutes KM and its use as a tool in an organization. In an organizational context, consciousness is awareness of a particular set of conditions that contribute to individual and organizational ways of doing things (Correia and Wilson, 1997). Benefits, the second concept, seemed to be centered around the idea of KM as competitive advantage. The third concept, Strategies, is concerned with sources of competitive advantage that emerge out of actions (Takeuchi and Nonaka, 2004). 
Lastly, Internal Factors are those organizational, individual and functional attributes that influence the way in which an organization does things (Correia and Wilson, 1997). This analysis revealed that the three discovered concepts, namely, Consciousness, Benefits and Internal Factors, support the three key issues identified from the extant literature. These issues, as discussed above in the literature review, are Awareness Cultivation, Objectives Definition and Actions Implementation. However, a new issue that builds on the derived concept of Strategies has been identified. This fourth issue, known as Strategy Adoption, has added a new dimension to our understanding of how to apply KM effectively. Strategy Adoption concerns the necessity for organizations to define and adopt relevant strategies in order to sustain the process of knowledge transfer, acquisition, creation and use. This study has identified knowledge sharing culture, best practices, R&D efforts and smart working as key strategic actions that can be adopted by organizations to sustain the implementation of KM. The contents of these strategic actions revealed a system-oriented approach (Telk Raj Sharma, 2003) to managing knowledge. Of special importance is the fact that knowledge sharing between individuals must be cultivated as part of the organizational culture. Therefore, a positively-oriented organizational culture increases the willingness to share knowledge between individuals. Furthermore, the emphasis on using organizational culture as a vehicle to link individual and organizational learning has been made in the literature (Brand, 1998; Williams, 2001). To enhance the effectiveness of knowledge acquisition, a structured and systematic way to capture the wealth of knowledge is desirable. In the context of this study, instituting best practice was identified as a mechanism to capture knowledge held in people's heads and individual experiences, such as tacit knowledge, for dissemination to others in an organization. Moreover, best practice refers to the practice that provides measurable improvements (Johnson, 1997). Therefore, using internal best practice to perform day-to-day work can improve the quality of work and minimize the possibilities of repeating the same mistakes. Further, the study recognized that R&D efforts were the focus of organizational learning and a function of the organization's ability to utilize knowledge to create novel innovations. The organizations under study identified a significant surge of new knowledge acquisition and sharing activities soon after embarking on their first R&D project. In this perspective, R&D efforts were recognized as having played a major and active role in supporting KM activities. In fact, an organization's capacity to engage in knowledge-intensive activities such as best practice and R&D depends greatly on its people's level of knowledge. Therefore, the focus of building a \"smart\" working environment emerged in both organizations studied. Smart working was perceived to be knowledge dependent and was deemed to be a desirable strategic action that could transform an organization into an intelligent organization. Nevertheless, accompanying the sentiment of smart working was a challenge to create a structured approach to proactively change employees' attitudes and work habits towards new knowledge acquisition. A subsequent analysis was concerned with integrating all four issues to develop a framework for the application of KM. 
Figure 1 depicts the four stages that represent the emerging phenomenon in a real-world situation. Further, it shows how the emergent themes derived from the study link to the concepts and stages within the framework. This framework includes a new concept known as Strategy Adoption that emerged from the study as discussed above. This framework provides a simple and practical sequence of activities that enable organizations to link KM practices to the working environment. Awareness Cultivation concerns developing the level of understanding of the concept of KM and its importance. It can be assessed through measuring awareness and perceptions towards the benefits of implementing KM as well as through the organization's efforts at making tacit knowledge explicit and capturing it as organizational knowledge. The three emerging themes relating to Objectives Definition refer to the reasons why organizations are practising KM. They further reflect the internal and external tangible benefits for the organization, as well as the personal development gained by individual workers. It is therefore important to ensure that all employees understand the benefits of KM so that they participate willingly and contribute to KM practices. The Strategy Adoption stage describes relevant strategic actions that can be adopted by organizations practising KM. These actions concern creating a knowledge-sharing culture, instituting best practice, engaging in R&D efforts and building a smart working environment as discussed above. These knowledge-intensive activities embrace innovative elements that enable practitioners to focus on translating the conceptual ideas of KM into a practical and working implementation programme. The Strategy Adoption stage identified in this study was found to have a significant influence on ensuring knowledge was created, shared and utilized consistently and effectively across the organizations studied. Thus, Strategy Adoption appears to be an important new component of our understanding of knowledge management practice. The Actions Implementation stage outlines internal factors within an organization affecting the success of KM activities. Strong support from management is crucial to building a knowledge-friendly working culture that accelerates knowledge sharing. More challenging is how to change the behaviour, attitudes and mindsets related to adopting new ways of doing things. Further, to ensure people's acceptance and active participation, knowledge activities should involve practical approaches that are incorporated into work processes. Thus, organizations must be focused on KM practices that are appropriate, pragmatic and aligned with the business environment in which they operate.\nConclusions: Knowledge-intensive industries, such as the one investigated in this study, need to develop effective KM practices in order to maintain their competitiveness. However, evidence about how to go about developing these practices is incomplete and justified the present study, which has potentially enhanced current theoretical models. However, the narrow scope of this study on a single industry in Malaysia limits the generalisations that might be made at this point. Further research with a broader cohort of industries and different contexts is, however, warranted. Integrating four major factors, a framework outlining four fundamental stages involved in practising KM has been generated by this study. 
These four sequential stages serve as a model that may enable KM practitioners to define process steps to implement KM. Further, the discovery and contribution of the Strategy Adoption stage has advanced the existing literature on the application of KM by describing pragmatic strategic actions that can be used for building effective KM praxis. A greater emphasis on an organizational-oriented approach that adopts a systematic strategy in managing knowledge will go a long way to enhancing KM as a strategic tool.\n",
"role": "user"
},
{
"content": "Purpose: - This paper aims to study existing knowledge practices, the use of knowledge management (KM) as a strategic tool and the factors affecting the use of KM in the Malaysian aerospace industry.\nDesign/methodology/approach: - Senior personnel in the industry were interviewed using a grounded theory approach and convergent interviewing. Secondary data in the form of company documents and reports were used to clarify issues raised in the interview. Triangulation of the findings with the literature resulted in the identification of a framework for the application of KM practices.\nFindings: - The framework identified in this study consists of four stages: awareness cultivation; objectives definition; strategy adoption; and action implementation. Each of these stages involves a number of key elements for successful KM implementation.\nOriginality/value: - KM is a considerable organizational asset as a source of competitive advantage and wealth. Thus, it is vital to identify those factors critical to the successful implementation of KM practices in organizations.\n",
"role": "assistant"
}
] |
Knowledge centricity and organizational performance: an empirical study of the performing arts
|
[
"Performing arts",
"Knowledge management",
"Organizational performance",
"Structural equation models",
"Knowledge centricity"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\n1. Introduction: In a fast-paced, knowledge-driven society, successful organizations must be able to use knowledge to advance their goals and missions. The concept of a knowledge-centric organization has emerged in a variety of industries. This concept describes how knowledge serves as a critical component of organizational effectiveness (Stonehouse and Pemberton, 2005). A knowledge-centric organization uses collective knowledge to advance organizational goals and shares knowledge among people and departments (Crawford et al., 2009). Knowledge-centric organizations gather and leverage disparate sources of data and information and view knowledge as a core value. More importantly, knowledge-centric organizations gain a competitive edge through the ability to innovate, operate more effectively and respond more quickly to changes in their environment (Grant, 1996; Malhotra, 2001). The conceptual origins of the knowledge-centric organization are closely associated with knowledge management and the increased importance of knowledge as a source of value for companies, institutions and societies (Foray, 2004; Mosconi and Roy, 2013). Knowledge centricity is also aligned with the theory of organizational knowledge creation and the important role that individuals and organizations play in articulating and amplifying knowledge (Nonaka, 1994; Nonaka et al., 2006).\n2. Literature review and hypotheses development: 2.1 The knowledge-centric organization\n3. Data and methods: This study focuses on nonprofit organizations, which are the prevalent structure in many performing arts industries (Markusen et al., 2011; DiMaggio, 2006; Caves, 2000; Frey, 1994; Hansmann, 1981). The researchers analyzed a cross-sectional sample of small and mid-sized nonprofit performing arts organizations in the USA. As is common in the arts and cultural sector (Greater Philadelphia Cultural Alliance, 2014, 2006), small and mid-sized organizations are defined as those with annual operating budgets of US$1m or less. In the USA, small and mid-sized nonprofit arts organizations comprise 90 per cent of the total population of nonprofit arts and cultural organizations (National Center for Charitable Statistics, 2015). From these small and mid-sized organizations, the study focused on performing arts disciplines as a means to add some consistency to the sample by studying organizations that had related missions or programming. Performing arts disciplines were defined per the National Taxonomy for Exempt Entities using codes A60, A61, A62, A63, A65, A68, A69, A6A, A6B, A6C and A6E. The sample of organizations studied is herein referred to as a \"cohort.\"\n4. Results: 4.1 Descriptive statistics\n5. Discussion: This article takes stock of the vast literature on knowledge management practices, with a focus on the concept of knowledge centricity. Although the term is not completely new in the literature, the authors noticed that there is no clear agreement on its definition, conceptualization and operationalization. Therefore, they propose a newly defined conceptual framework of this concept and a new working definition of a knowledge-centric organization. Moreover, this conceptual framework has been tested on a cross-sectional sample of small and mid-sized nonprofit performing arts organizations in the USA. The factor analysis conducted on organizational practices brought forth eight thematic variables that represented a majority of the total variance. 
They served as a measure of knowledge centricity and were validated by SEM analysis. Furthermore, several relationships between knowledge-centric practices and organizational performance metrics were identified. These relationships relate to issues of staff training and development, technology utilization, audience development and financial sustainability. The implications for the validation of our initial set of hypotheses are substantial and presented below in sequential order:\n6. Limitations: While a significant amount of analysis can be conducted using the above method and data sources, there are some limitations to the scope of this approach and to the methodological frame of the present research. While the sample can be considered representative of the larger population of this type of organization, these organizations are not geographically spread uniformly throughout the USA. This is because the secondary data from the CDP are available only in 14 states and the District of Columbia. Within this limited geographic region, many rural communities are not well represented in the study cohort, as a majority of the cohort are clustered in urban areas.\n7. Conclusion and implications: The approach and findings of this study provide valuable theoretical and practical implications, advancing an understanding of knowledge centricity and its impact on organizational outcomes. While this study was focused on nonprofit performing arts organizations, the nature of the knowledge-centric practices studied and methods of analyses used demonstrate the study's relevance to the broader nonprofit sector and, with some minor adaptation to and validation of the survey instrument, to small and mid-sized organizations in a range of industries.\n",
"role": "user"
},
{
"content": "Purpose: This paper explores the concepts of knowledge-centric organizations in the performing arts sector to understand how specific organizational practices relate to measures of financial and operational performance.\nDesign/methodology/approach: A quantitative analysis of 368 small and mid-sized nonprofit performing arts organizations in the USA was undertaken via primary data on organizational practices and secondary data on performance metrics. Structural equation modeling was used to validate the latent construct of knowledge centricity and to test hypotheses on how knowledge centricity impacts financial and operational performance, and is influenced by firm-level demographic variables.\nFindings: Results show several distinct performance metrics that are statistically associated with knowledge-centric practices of the organizations analyzed.\nResearch limitations/implications: This article investigates the knowledge centricity of organizations, a relatively nascent theoretical concept, which is of significant relevance in today's knowledge-driven economy. The findings can serve as a basis to further investigate strategic approaches that arts organizations can undertake to remain sustainable and operate effectively in a knowledge-driven society.\nPractical/implications: It provides critical insights into management practices and approaches that can be instituted to drive improved organizational performance.\nOriginality/value: Building on the extant literature, this article develops a conceptual framework of knowledge centricity and defines a knowledge-centric organization. It thoroughly investigates the latent construct of knowledge centricity, identifies how knowledge centricity impacts financial and operational performance of nonprofit performing arts organizations, and provides grounding for future studies.\n",
"role": "assistant"
}
] |
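
The row above reports that structural equation modeling validated a latent knowledge-centricity construct measured by several observed practice variables. A minimal one-factor measurement-model sketch follows, assuming the third-party semopy package and synthetic indicator data; the indicator names are hypothetical stand-ins for the paper's eight thematic variables.

```python
# One-factor CFA sketch for a latent "knowledge centricity" construct,
# assuming the semopy SEM package; indicators and loadings are synthetic.
import numpy as np
import pandas as pd
import semopy

rng = np.random.default_rng(1)
n = 368                                  # sample size reported in the abstract
latent = rng.normal(size=n)              # unobserved knowledge centricity
cols = [f"practice_{i}" for i in range(1, 9)]   # hypothetical indicator names
data = pd.DataFrame({c: 0.7 * latent + rng.normal(scale=0.7, size=n)
                     for c in cols})

# Each observed practice loads on the single latent construct.
desc = "knowledge_centricity =~ " + " + ".join(cols)
model = semopy.Model(desc)
model.fit(data)
print(model.inspect())                   # estimated loadings and variances
```
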
Market orientation, strategic flexibility, and performance: implications for services providers
|
[
"Market orientation",
"Marketing strategy",
"Response flexibility",
"Competitive advantage",
"Services",
"Performance measurement (quality)"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\n__NO_TITLE__: An executive summary for managers and executive readers can be found at the end of this article.\nIntroduction: In the era of globalization, which has brought about unprecedented changes in the service economy, organizations of all sizes and structures are searching for strategies to improve performance without sacrificing quality. A market orientation that provides for market-focused strategic flexibility to sustain competitive advantage is a strategic solution. Characteristically, the service sector is diverse, ranging from large multinational corporations to small-and-medium enterprises. With the increasing importance of the service sector, the quality, competitiveness, and efficiency in service companies are predicted to be key drivers of growth and performance.Competitiveness in the service industry, rather than in manufacturing, will propel the global economy in the twenty-first century. A popular belief among consumers is that the service economy is new and is only recently replacing the obsolete manufacturing base developed in the twentieth century. Zeithaml and Bitner (2003) provide a more accurate picture of the long-term influence of the service economy:* as early as 1929, more than half of the working population was employed in the service sector;* approximately 54 percent of gross national product (GNP) was generated by services as early as 1948; and* the trend has continued and as of 1999 services represented 80 percent of employment and 78 percent of gross domestic product (GDP).The USA is the world leader and champion in the transformation to a service economy. The transformation from a manufacturing-based to a service-centered economy seems to be nearly complete. The fastest growth rates in job formation and in number of jobs are in service industries. In 2000, there was an $81 billion US trade surplus in services while manufactured goods remained in the red (Bach, 2001).A great deal of customer communication, interaction, and knowledge transfer relies on a consistent and committed use of the market orientation philosophy. Since service delivery requires human interaction with external and internal customers, a market orientation should be particularly important for a service firm (Schneider, 1987). What are the implications of a market orientation for the growing number of service providers? Should service providers utilize the market orientation philosophy differently than their goods producing counterparts? Managers, employees, and other stakeholders need to be asking these questions given the significance of the service sector. The importance of a market-oriented culture is crucial to all levels of the modern organization (Day, 1990; Deshpande and Webster, 1989). Service providers should be part of this universe of modern organizations. However, service marketing is different. When purchasing goods, the consumer employs many tangible cues to judge quality, including color, style, finish, package, and fit. When customers purchase services, fewer tangible cues exist. In many cases, tangible evidence is limited to the service provider's physical facilities, equipment, and personnel. 
Differences in the inseparability, intangibility, heterogeneity, and perishability of services underscore the importance of a market orientation (Lovelock, 2001). This article draws on the literature in market orientation and services marketing to suggest implications for service providers who adopt a market orientation. According to Caruana et al. (1999), the constructs of market orientation and service quality are related. Market orientation's positive association with market performance provides the critical foundation (Narver and Slater, 1990). However, research focusing on market orientation concepts as they relate to the service sector is sparse (Van Egeren and O'Connor, 1998). This investigation extends current thinking by integrating market orientation and market-focused strategic flexibility within the services marketing arena. A model Johnson et al. (2003) developed for goods marketing has been adapted and extended to apply to services marketing. Discussion of this new model for service providers is the basis for the framework development contribution of this paper. In addition, the \"strategic wheel of service performance\" provides a framework to discuss the managerial implications that are forthcoming from an integration of thinking regarding market orientation, strategic flexibility, competitive advantage, and service performance. The focus and contribution of this research are framework development and managerial implications, not empirical testing.\nMarket orientation, strategic flexibility, and services framework: Market orientation\nPerformance implications for service providers: A service firm's judgmental and objective performance can be enhanced by improving the organization's CRM, retention, satisfaction, loyalty, and lifetime value (LTV). Specific customer-focused strategies for each performance type depend on the service organization's strategic flexibility, adaptability to environmental turbulence, and achievement of a competitive advantage. The following discussion focuses on managerial implications in these areas, as depicted in the \"strategic wheel of service performance\" (Figure 2), with specific company examples.\nCRM\nConclusions and guidelines for managers: The impact of developing a market orientation for service providers should be higher levels of CRM, retention, satisfaction, loyalty, and LTV. Moreover, these implications should be even more visible and meaningful for service providers as opposed to goods marketers due to the nature of services themselves. The saliency of market orientation in service firms is underscored by the pivotal role that it plays. Caruana et al. (1999) found empirical support that the reliability of a service firm is positively related to its judgmental and objective performance. The context of this framework is further complicated by the fact that many, if not most, of the service implications are interrelated. They all would not necessarily change in unison. However, it follows that increases in customer retention, for example, should be related to increases in satisfaction and loyalty (Figure 2). 
Increases in one or more of these variables should help service providers improve their judgmental and objective performance. The following guidelines should be useful to service provider managers who plan to develop a business philosophy based on market orientation:\n* The service economy is large, growing, and represents an ample opportunity to practice market orientation principles to determine their extended applicability to service providers.\n* The market orientation literature offers many insights into competing in today's competitive marketplace for service providers. Investments in customer profitability, retention, and loyalty programs should pay significant dividends for service marketers.\n* Developing a market orientation has the potential to provide several positive financial and strategic performance consequences for service firms who are successful at integrating this philosophy across their organizations.\n* Managers working on integrating a market orientation into their organizations should not expect gains in specific financial and strategic business performance over the short term. To integrate these principles properly across organizational boundaries requires a longer-term time horizon.\n* Service organizations will need to develop accurate measurement instruments to quantify the levels of the financial and strategic business performance criteria discussed.\n* Managers will need to design customer metric feedback mechanisms that are cross-functional to help determine when marketing strategies need to be modified.\n* Developing and maintaining a market orientation cannot be the responsibility of the internal marketing department alone, but must be fully integrated across all functions of service providers.\nLimitations and future research: Service providers will greatly benefit from such a framework that embraces the market orientation construct, strategic flexibility, and service performance. Research emphasizing market orientation concepts as they relate to the service sector is sparse. Clearly, more conceptual and empirical research needs to be conducted to expand the market orientation philosophy from product marketer to service provider. As with most articles based on conceptual arguments, this paper is subject to certain limitations. The framework discussed here has not been tested empirically. Empirical evidence based on a cross-section of service providers would extend this initial discussion further. Subjecting the framework to analytic rigor would allow scholars, as well as practitioners, to understand more fully the inter-relatedness of the service implications discussed.\n",
"role": "user"
},
{
"content": "Purpose: - This article proposes investigating implications for service providers who adopt a market orientation. It hopes to extend current thinking by integrating market orientation and market-focused strategic flexibility.\nDesign/methodology/approach: - A model is extended to apply to services marketing. The \"strategic wheel of service performance\" provides a framework to discuss the managerial implications from integration of market orientation, strategic flexibility, competitive advantage, and service performance.\nFindings: - The impact of developing a market orientation should be higher levels of customer relationship marketing (CRM), retention, satisfaction, loyalty, and lifetime value (LTV). Increases in one or more of these interrelated variables should help service providers improve their judgmental and objective performance.\nResearch limitations/implications: - More research needs to be conducted to expand the market orientation philosophy to the service provider. Subjecting the framework to analytic rigor would allow scholars and practitioners to understand more fully the inter-relatedness of the service implications.\nPractical implications: - Practice implications of the paper are: the service economy is an opportunity to practice market orientation; investments in customer profitability, retention, and loyalty programs pay dividends; A market orientation enhances financial and strategic performance; integration of principles across organizational boundaries requires a long time; financial and strategic business performance criteria need to be quantified; cross-functional customer feedback mechanisms need to be designed; and market orientation must be integrated across all service function providers.\nOriginality/value: - The conceptual framework integrates market orientation, market-focused strategic flexibility, strategic competitive advantage, and subjective and objective performance outcomes as applied to service providers. The discussion strengthens the strategic value of market orientation and provides managerial implications for the services sector.\n",
"role": "assistant"
}
] |
Measuring host sincerity: scale development and validation
|
[
"Scale development",
"Scale validation",
"Cultural consumption"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\n1. Introduction: In relative terms, extant research overlooks the notion of \"sincerity\" and its role in stimulating visitor interest in places and their local inhabitants (Prentice and Andersen, 2007). Meanwhile, authenticity and its related concepts receive sustained attention in hospitality and tourism studies (Kolar and Zabkar, 2010), and an interest in capturing and understanding what is perceived to be \"real\" inspires research across disciplines (Beverland, 2005). Cultural places should be \"both authentic and sincere, and not unnecessarily staged or superficial\" (Prentice and Andersen, 2007, p. 95). Here, authenticity refers to destinations and their offerings felt to be meaningful, whereas sincerity explains relationships with people perceived to be meaningful (Prentice and Andersen, 2007; Prince, 2017). Extant research emphasises two of the main concepts relating to perceived authenticity: existential and object-based authenticity (Bryce et al., 2015). However, the lack of a specific measurement scale for host sincerity leaves a vital knowledge gap and offers the opportunity for a much-needed investigation to assess the sincerity concept alongside the destination authenticity's component parts.\n2. Literature review: 2.1 Host sincerity\n3. Methodology and scale development: In response to calls encouraging scale development in tourism and hospitality research (Khan and Rahman, 2017), this study followed the systematic scale development approach (Churchill,1979; Netemeyer et al., 2003) to develop a host sincerity scale. This included the following elements:\n4. Results: 4.1 Results of Phase 2: item purification\n5. Discussion and conclusions: This study examines an often-overlooked aspect of the travel experience - tourists' genuine interactions with local hosts. This is vital to what Taylor (2001) considers as \"sincere\" tourism and hospitality and presents a more realistic representation of both the destination and culture of those who live and work there, an experience increasingly pursued by tourists. Thus, this research replied to the need for a new scale to measure the concept of \"host sincerity\". To achieve this, it followed the multi-step mixed method scale development procedure (Netemeyer et al., 2003). The host sincerity construct has been tested and validated using a sample of international visitors to Kandovan and Cappadocia and can be used as a research instrument in future studies aiming to assess host sincerity in the hospitality field. As no previous study has been conducted on the development of a host sincerity scale, this research serves as the first empirical evaluation of the concept, significantly contributing to both the theory and practice.\n",
"role": "user"
},
{
"content": "Purpose: The purpose of this study was to develop and validate a scale for host sincerity.\nDesign/methodology/approach: The ten-item host sincerity scale was developed by following a multi-stage scale development procedure (Delphi technique, qualitative interviews and surveys).\nFindings: The findings reveal that host sincerity is a second-order construct with two underlying dimensions: \"sincere social interaction\" and \"sincere emotional response\". By incorporating host sincerity into the consumer-based model of authenticity, the findings established significant relationships among all constructs, confirming the predictive validity of the host sincerity construct.\nResearch limitations/implications: Data were gathered from visitors to troglodyte heritage sites (Kandovan and Cappadocia). Future studies should test the newly formed sincerity scale at other cultural destinations to further explore the generalisability of the scale. Further, data were gathered from tourists. Future studies should consider host sincerity from a host perspective.\nPractical implications: Cultural destination managers and local hosts can use this instrument as a supplementary tool to evaluate how sincere their hospitality offering appears to tourists.\nOriginality/value: This paper develops a host sincerity scale to explore the importance of sincere host-guest interactions and tourists' emotional response to these interactions. It extends the consumer-based model of authenticity by drawing further attention to the importance and impact of host sincerity in stimulating memorable tourism experiences.\n",
"role": "assistant"
}
] |
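
The item-purification step named in the row above (Phase 2 of the Churchill-style procedure) conventionally relies on internal-consistency statistics such as Cronbach's alpha. A generic numpy sketch of the alpha computation follows, run on synthetic responses rather than the study's data.

```python
# Cronbach's alpha, the usual item-purification statistic in Churchill-style
# scale development; synthetic responses, not the study's data.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: respondents x items matrix of scale responses."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_variances / total_variance)

rng = np.random.default_rng(2)
true_score = rng.normal(size=200)
# Ten correlated Likert-like items, mirroring the ten-item sincerity scale.
responses = np.column_stack([true_score + rng.normal(scale=0.8, size=200)
                             for _ in range(10)])
print(f"alpha = {cronbach_alpha(responses):.2f}")
```
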
A novel steady-state maintenance simulation framework for multi-information disseminations in crowd network
|
[
"Emergence",
"Crowd network",
"Multi-source information dissemination",
"Steady-state"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\n1. Introduction: The phenomenon of crowd intelligence (Chai et al., 2017) can be easily observed in our society, such as \"two heads are better than one\" and \"everybody's business is nobody's business.\" With the rapid development of information technology, crowd intelligence is more extensive and complex. People, enterprises, governments, equipment and articles become more and more smart with the ever expanding of intelligent depth, intelligent width and connection scale to form various crowd network systems (such as e-commerce platform, networked manufacturing supply chain, Wikipedia and network election).\n2. Related work: Simulation is an important way to study public opinion control in crowd science multi-source information system. In this process, a huge amount of information data are produced, and illegal information is inevitable in these data, such as rumors. These impact on the country and society stability cannot be underestimated, so this paper cannot guide such public opinion on the real network, that is, simulation has become an important method to study public opinion control. Information dissemination refers to the initial communicator spread information to other groups. The current social network is interactivity, timeliness and suddenness, its interaction process is the information dissemination. How to judge steady-state from data and to mining potential water army based on public opinion, so that the social network keep stable state have great significance. Therefore, the issue of govern water army in information dissemination has become the hot research by many domestic and foreign scholars.\n3. Control system framework: 3.1 Member mode\n4. Simulation result: According to the steady-state maintenance simulation framework in this paper, the simulation experiment of multi-information disseminations is designed and simulation result is shown in Figure 4. Among it, the number of primitive unit is 430, the number of collective unit is 130 and the number of iteration is 1,000. Horizontal is the number of iteration, vertical is the sum of information intensity in the network, green is intensity change without any new information centralized delivery, blue is intensity change with random information delivery and red is put specific information according to unit's preference.\n5. Conclusion: Crowd network is rising and developing in our daily and economic lives, gradually replacing social networks. Compared with existing social networks, the structure of multi-source information dissemination systems is more changeable. Many intelligent subjective in crowd network system are self-centered subjects, it has consciousness and behavior's diversity and personalization. Its massive nodes are unmatched by traditional networks. Interaction relationship of many intelligent subjective constitutes the network structure. Their behavior is a combination of rationality and irrationality, which makes the intelligent subjective more in line with the word intelligent, single node's intelligence and the overall intelligence of the system work together, which is bound to bring greater challenges in maintaining system status.\n",
"role": "user"
},
{
"content": "Purpose: The era of crowd network is coming and the research of its steady-state is of great importance. This paper aims to establish a crowd network simulation platform and maintaining the relative stability of multi-source dissemination systems.\nDesign/methodology/approach: With this simulation platform, this paper studies the characteristics of \"emergence,\" monitors the state of the system and according to the fixed point judges the system of steady-state conditions, then uses three control conditions and control methods to control the system status to acquire general rules for maintain the stability of multi-source information dissemination systems.\nFindings: This paper establishes a novel steady-state maintenance simulation framework. It will be useful for achieving controllability to the evolution of information dissemination and simulating the effectiveness of control conditions for multi-source information dissemination systems.\nOriginality/value: This paper will help researchers to solve problems of public opinion control in multi-source information dissemination in crowd network.\n",
"role": "assistant"
}
] |
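
The simulation experiment summarized in the row above (430 primitive units, 130 collective units, 1,000 iterations, three information-delivery strategies) can be sketched as a toy agent-based loop. The update rule, delivery sizes, and decay constant below are illustrative assumptions, not the paper's model.

```python
# Toy agent-based dissemination loop at the experiment's scale: 430 primitive
# units, 130 collective units, 1,000 iterations, three delivery strategies.
import numpy as np

N_PRIMITIVE, N_COLLECTIVE, ITERATIONS = 430, 130, 1000
N = N_PRIMITIVE + N_COLLECTIVE

def simulate(strategy: str, seed: int = 0) -> np.ndarray:
    rng = np.random.default_rng(seed)
    intensity = rng.random(N) * 0.1       # initial information intensity
    preference = rng.random(N)            # each unit's topical preference
    totals = np.empty(ITERATIONS)
    for t in range(ITERATIONS):
        # Decay plus diffusion toward the network mean.
        intensity = 0.95 * intensity + 0.05 * intensity.mean()
        if strategy == "random":          # random centralized delivery
            intensity[rng.choice(N, size=10, replace=False)] += 0.5
        elif strategy == "preference":    # deliver to the most receptive units
            intensity[np.argsort(preference)[-10:]] += 0.5
        # strategy == "none": no centralized delivery at all
        totals[t] = intensity.sum()       # summed intensity, as in Figure 4
    return totals

for s in ("none", "random", "preference"):
    print(s, round(float(simulate(s)[-1]), 2))
```
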
Welfare schemes in India: decentralization dynamics and stakeholder influences
|
[
"India",
"Stakeholders",
"Decentralization",
"Welfare schemes"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\n1. Introduction: Over the last five decades India has made systematic efforts at removing poverty and increasing the well-being of its citizens through targeted welfare programs (Appendix). Table I gives the budget allocations and expenditure of the welfare schemes run by the central government, as an indicative of the nature of expenditure on welfare schemes in India. Subbarao's (1997) estimation of the marginal odds of participation in India using three poverty alleviation program interventions (the public food distribution system, the public works programme and the rural development) found them to be most effective for those living at the poverty line (roughly the 40th percentile). This lends support for the contention that welfare programs often gainfully redistribute employment within underprivileged groups.\n2. The methodology: Outlining of an optimal strategy was imperative for developing this conceptual paper. Several studies investigate stakeholder dynamics from various perspectives. Earlier sociological studies focus on beneficiary demographics and social divisions (Srinivas, 1976, 1980; Sangave, 1980). Economists investigate welfare scheme performance and poverty impacts (Herring and Edwards, 1983; Ravallion, 1995; Gaiha, 1993). Political science and economics also looks at political influence (Esping-Andersen, 1989; Rose, 1984). A shift towards complex conceptualizations and capture is seen in the studies after 1990s. Post 1990s studies after 1990s capture dynamics like elite capture, bureaucracy-political tensions and negotiated dynamics between social classes, and are the focus for this paper. Papers representing all main stakeholders are reviewed, namely, politicians, bureaucrats, beneficiary and civil society organizations. Political party dynamics are captured as well. Political parties operate at the macro level while decentralization has put politics at the micro level, making the individual politician more significant in the scheme of things. We wanted to look at this interplay. The inclusion/exclusion decision for the papers was taken on two criteria: the paper/document had to explicitly investigate decentralization, and had to include welfare scheme as the overall theme under which decentralization was investigated.\n3. Main stakeholders in a decentralized welfare system: Although great optimism exists about the democratizing capacity of the Panchayats (at multiple levels), literature highlights three interesting aspects where local socio-economics and politics intervenes in the effectiveness of the welfare schemes, namely when dealing with decision making during policy implementation, when dealing with the process of resource redistribution, and the indirect influence of politics through the bureaucracy.\n4. Discussion: Diversified interests and preferences of stakeholders decide their individual course of actions. Within decentralization, we observe stakeholder behavior being driven by several factors external to the traditional social and economic diversities that signifies the Indian sub continent. Studies on the dynamics of decentralization, therefore, look at loss and gain from a particular set of institutional opportunities, in relation to the purpose of the decentralized scheme. For instance, studies show active lobbying at the local level. Similarly, decentralization drives resource allocation to be driven by the local elite and powerful. 
Several studies establish how the poor and the disadvantaged get limited benefits of participation due to powerful local elite or traditional socio-economic situation of the agrarian societies (Echeverri-Gent, 1992; Heller et al., 2007; Tendler and Freedheim, 1994).\n5. Conclusion: Social and economic disparities were extremely high till the 1980s. Caste hierarchies, authority of the elders and landed class defined the structures of control and their decisions had passive acceptance. However, 25 years back, the government and para-statal institutions started taking several significant steps to meet the disparity in favour of backward regions, socially vulnerable communities and the poor. Institutions of local self governance like Panchayati Raj, and employment schemes like Jawahar Rozgar Yojna were concentrated initiatives in this direction. Coupled with high degree of economic liberalization and privatization, public awareness due to the information technology and telecommunication revolution, and promotion of community based projects by the government in the past decade, the traditional societies have moved towards a social change, i.e. empowerment. Decentralization was meant to promote equality and give voice to all classes of people, by challenging the traditional social structure of control that emphasizes submission to social divisions. However, it has created its own socio-political dynamics.\n",
"role": "user"
},
{
"content": "Purpose: The purpose of this paper is to provide a background to the special issue on welfare schemes in India. After 25 years of decentralization of governance and structural adjustments implemented in the 1980s and 1990s, have welfare schemes implementation and execution become more accountable and efficient? This paper seeks a critical look at the welfare schemes and its relationship with decentralization and stakeholders' dynamics.\nDesign/methodology/approach: The paper is based on a review of studies post 1990s. Papers representing all main stakeholders are reviewed, namely, politicians and political parties, bureaucrats, beneficiary, and civil society organizations. The inclusion/exclusion decision for the papers was taken on two criteria: the paper/document had to explicitly investigate decentralization, and had to include welfare scheme as the overall theme under which decentralization was investigated.\nFindings: The paper summarizes the new complexities in the system. Stakeholder behaviour is driven by several factors external to the traditional social and economic diversities that signify the Indian sub continent. For example, the authors see the lobbying process shifting to the local level, increasing importance of the local politician and the significance of forming local coalitions and partnerships for better resource allocation.\nOriginality/value: The paper attempts to provide an overview by going beyond a critique of development to focus on the perils of operating within a socio-economically complex society.\n",
"role": "assistant"
}
] |
Collaborative partnerships in an urban environment: working together to benefit all
|
[
"Partnership",
"Higher education",
"Social interaction",
"Urban communities"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\nIntroduction: Society is filled with challenges and opportunities. The numbers of individuals who are unemployed or unemployable has risen due to the changing economy, advances in technology, needs for specialized skill sets, social inequities, and pure simple bad luck. Many efforts have been employed to help underserved and at-risk populations gain momentum in their efforts to successfully enter the labor market and/or move forward in their earnings potential. Several special interest groups have independently launched efforts to work with at-risk and hard-to-place populations. The success of these groups has been limited as many of them try to launch intervention/training activities with limited resources and less than realistic expectations. The hospitality industry is poised to launch and in some cases continue its efforts to provide training opportunities for the unemployed, underemployed, and those individuals in need of skill refinement and retooling. Unique to the hospitality industry is its broad spectrum of career opportunities from unskilled and semi-skilled positions to para-professional and professional appointments. While hospitality programs all over the world are preparing college graduates for entry-level management opportunities, many of the starting hourly positions are often not filled with qualified and skilled applicants, thus compounding the turnover problem and setting up a failing system.These grim realities, together with the heightened awareness of companies to employ a more diversified work force, the urban academia's intent to partner with the surrounding community, faith-based organization's resolve to be an advocate and guardian of its community's residents, and policy makers desire to actively engage in social activism have all encouraged a sensitivity to help ignite the economy by servicing the individuals comprising at-risk and hard-to-place populations.Independently, each of these service groups of well-meaning individuals has developed programs to address their issues. Higher education has traditionally theorized and measured the potential impact of intervention programs on the community. Faith-based organizations have historically located in areas where these targeted populations reside to offer spiritual guidance, hygiene amenities, hot meals, and hope for the future (Mantone, 2004). Area political environments commonly incorporate the idea of lowering unemployment rates by finding a job for everyone who wants one into their campaign platforms. As the independent service groups begin to meet with some frustration due to limited success and the identification of participant challenges that are beyond the scope of the individual service providers, efforts to organize the resources of each contributing organization to best serve the needs of the targeted population spawned.This group organization, also known as a collaborative partnership, gains importance because the dissolution of an existing problem is greater than the resources of any single stakeholder. Thus, the function of community interdependence binds all participants to the greater goal with individual agendas being secondary to the process. Further, each stakeholder realizes that to ignore the problem would serve only to increase environmental turbulence. 
Gray (1985) comments that \"collaboration is initiated through appreciation of the interdependence which exists and the need for joint 'appreciation' among stakeholders of the nature and substance of their interdependence\".\nNeed for the study\nLiterature review: A true collaborative partnership\nResults and discussions: A real life example\nA look into the future: Program evaluation and auditing are important for self-reporting and as a measurement tool of social responsibility (Maurrasse, 2002). Most evaluation criteria are based on accountability to the community, sources of income, and accountability to the individual organizational conscience. For higher education, the income streams may be donors, alumni, or funding agencies. Evaluation is also measured in terms of the lessons learned from the intervention, the leverage of support, and the role of each collaborative partner. In an era marked by limited funding opportunities and greater emphasis on community involvement, collaborative partnerships are a good way to address community needs that impact each organizational partner. The collaborative partnership stimulates discussion, dialogue, and interaction within and between organizations that recognize opportunities for excellence but realize their limited ability to effect change without assistance. The interdependency of stakeholders is a vital link for the project process and its success, as it sensitizes the partners to the reality that the challenge/theme/problem is too large for each partner to address independently. That same reality also helps each partner recognize the value and worth of the resources it brings to the collaborative partnership table.\n",
"role": "user"
},
{
"content": "Purpose: - This paper documents the effort of an urban university working collectively with a faith-based organization, industry partners and the government to better the livelihood of individuals in a community.\nDesign/methodology/approach: - Conceptual paper that reviews current literature and applies it to a case study model.\nFindings: - This conceptual writing offers strategies for group success through discussions of goal setting, benefit articulation, necessary infrastructure, curriculum development, and concludes with an outline of a true case study.\nPractical implications: - Collaborative efforts of higher education, faith-based organizations, government, and industry can work together to establish programs whose intent is to provide educational and training opportunities to at-risk populations.\nOriginality/value: - The value of the paper is in the coordination of resources to provide a service deliverable that will benefit the larger community.\n",
"role": "assistant"
}
] |
A user pre-occupancy evaluation method for facilitating the designer-client communication
|
[
"Building information model",
"User activity simulation",
"Pre‐occupancy evaluation",
"Requirements management",
"Designer‐client communication",
"Virtual reality",
"Modelling",
"Communication"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\n1. Introduction: Although there are varieties of architectural design processes, the common ground is that the building design process usually starts from a briefing stage, and ends as design drawings. Archer (1968) defined design process as four intertwined phases:1. Problem analysis.2. Solution synthesis.3. Evaluation.4. Communication.The communication between different participants goes through the whole design process. Participants in a typical architectural design process usually include architectural designers, clients, and consultants from many other disciplines. During the briefing and design solution development process, the design-client communication is intensive and significant.Apart from preparing the design drawings and demonstrating to the clients during the communication, the architects' duties may include:* to consult the client about significant design issues;* to investigate the feasibility of the requirements;* to advise on the initial brief; and* to advise on development of the brief from the employer's requirements and so on (RIBA, 2000).In addition, the clients' intentions of communication with designers during the briefing and design stage may include:* to specify and develop requirements; and* to review design solutions.The communication between these two groups of people is usually based on the clients' requirements and the designers' solutions (see Figure 1). A Brief is a formal document containing the written instructions and requirements of a client in a construction project (Yu, 2006). In the practice, there is constant interaction between the brief and design proposals (RIBA, 2000).There is a considerable gap between the designers and clients, which is, unlike architects, some inexperienced clients are usually not trained and their comprehension in the three-dimensional space is limited. The difficulties for inexperienced clients to read drawings affect them to specify the brief. Problem also stems from the fact that clients cannot imagine how the design will be emerged after construction phase (Lertlakkhanakul et al., 2008). In addition, it is rather difficult for them to imagine the way that organization's daily activities being accommodated in the new built environment (such as movement patterns).Some other problems would also reduce the efficiency of designer-client communication. In practice, the brief, as a \"yardstick\" to measure the further development of the design, were sometimes changed even after approval (RIBA, 2000). This might either lead to the abandoning of the completed design work, or influence the cost and statutory consents. Thus it is not only essential to start with an identifiable approved project brief, but also need a procedure for brief change control (RIBA, 2000). Kiviniemi (2005) found that there is a lack of mechanism for designers to record, manage and track changes of the clients' requirements during the design stage. 
Problem also emerged from the client's perspective, which is that although tools or models were developed to help clients define requirements (Akin and Sen, 1995; Hansen and MacLeod, 1996; Kamara and Anumba, 2001), there is a need to establish a framework can not only guide inexperienced clients to define requirements but also to help them review the design solutions against these requirements and collect their feedback during the design development process, especially in the context of virtual built environment.In order to solve the problems mentioned previously, a User Pre-occupancy Evaluation Method (UPOEM) is established to facilitate the designer-client communication during early architectural design stage. In addition, two objectives of this study are defined as follows:1. First, virtual reality technologies will be applied to improve clients' understanding of their built environment. A virtual environment which can demonstrate both the building and users' activities in the built environment will be established. The reason of simulating end-users' activities is that the buildings usually play a key role of accommodating user's organizations and equipment, and enable their activities. Ekholm and Fridqvist (2000) stated that, while the development of bubble diagrams in space planning process, the activities are considered as the criteria to define the size and relationship between these bubbles (representing spaces), but not the building's spaces themselves. Therefore, it is helpful to visualize the end-users' activities and provide a platform for them to evaluate to what extent the built environment could match their requirements.2. Second, a requirements and feedback interface will be designed to facilitate the clients to manage requirements and review the design. This interface is intended to remind clients of the requirements of the given design solution, and guide them to evaluate the design against these requirements. Both the requirements and evaluation results are recorded as the attributes of each room will be saved in a database during the development of the design process.The paper is organized into the following sections. Section 2 introduces the techniques applied in the User Pre-occupancy Evaluation Method (UPOEM), including building information model, user activity simulation, and requirements' documentation. This is followed by the section 3, the components of the UPOEM and a framework for implementation in practice will be introduced. Section 4 introduces a user activity scheduling method used in UPOEM and features of the user activity simulation model. After that, the design and features of the requirements and feedback interface will be demonstrated in section 5. Finally, the conclusions and recommendations for future work will be provided in section 6.\n2. Literature review: To achieve the specific objectives, this UPOEM applied the following techniques, which include building information modeling (BIM) technology for providing accurate building information, user activity scheduling method as the basis of user activity simulation, and a structured requirements documentation method for the basis of the pre-occupancy evaluation. Thus this section introduces the related works from the following three aspects.Features of BIM tools\n3. Design of the UPOEM: Components of the UPOEM\n4. User activity scheduling and simulation: Activity scheduling method applied in UPOEM\n5. 
Requirements and feedback interface: In the UPOEM, the building performance is considered as the object of pre-occupancy evaluation. In this paper, the evaluation objects focus on the spatial properties of the architectural design. It is because during the designer-client communication, the spatial factors (size, location and adjacency) and visual factors (appearance and view) are the basic factors concerned by during the architects and clients communication. The extension of these evaluation objects to other aspects such as site, building envelope or HVAC system will be discussed in future research. Therefore the following sections will introduce the clients' requirements and feedback database and interface based on the spatial factors.Design of the requirements and feedback database\n6. Conclusions and future work: This paper proposed a BIM-based User Pre-Occupancy Evaluation Method (UPOEM), which aims to facilitate the design-client communication in architectural design process. User activity scheduling and simulation technology was used to demonstrate how end-users' organization will be accommodated in the future built environment, so as to enhance clients' understanding of the design solutions. A clients' requirements and feedback interface was designed to facilitate clients to conduct a pre-occupancy evaluation in terms of spatial and visual factors. A procedure to guide the implementation of UPOEM in practice was also proposed. Thus this UPOEM provided an alternative tool to improve the efficiency of the communication between designers and clients during the process of architectural design.At current stage, the UPOEM was applied in a pilot case study of a campus project. During the implementation of this method, some of the limitations were found which can guide the improvement of this method in further research. First, sufficient time is needed for training clients to interact with the user activity simulation model and the requirements and feedback interface. If the clients cannot control the model smoothly, their understanding of the built environment will be affected, especially under time constraint. Second, the process of modelling the building as well as the end-users' activities is time-consuming, especially for a large organization. Sometimes more details were required by clients, such as natural lighting and material of building elements, which may also increase the time of modelling.In the future research, more case studies will be designed to compare the different impact on clients' performance between applying UPOEM and the traditional communication method. The specific requirements and feedback data of each space instance or type will be integrated with the BIM tools (such as Revit) as attributes of the rooms. In addition, increasing building performance information will be integrated with the user activity simulation model to demonstrate a more realistic built environment.\n",
"role": "user"
},
{
"content": "Purpose: - This paper introduces a building information model (BIM)-based user pre-occupancy evaluation method (UPOEM), which is applied in architectural design stage to narrow the gap between inexperienced clients and designers. It aims to improve the communication efficiency.\nDesign/methodology/approach: - The literature review studied the related technologies applied in the UPOEM, which includes building information modeling, user activity scheduling, and requirements documentation method. Based on the building information model, a virtual environment involving the end-users' daily activities was built up. In this virtual environment, the clients can observe how their organization will be accommodated in the built environment according to their daily work routines. A requirements and feedback interface was designed to facilitate the clients to conduct a pre-occupancy evaluation and store the relevant information into the database.\nFindings: - In briefing and design stages, there is a considerable gap between the inexperienced clients and designers when the clients specify requirements and review the design solutions. In the UPOEM, the user activity simulation model can facilitate clients to obtain a better understanding of the design. The requirements and feedback interface can also help them specify requirements and give comments to the design. Thus it provided an alternative tool to improve the efficiency of the designer-client communication.\nResearch limitations/implications: - This research established a virtual platform which involves both building information and the user activity information. This platform can be further extended by the integration of more building performance simulation tools. Thus it provided a potential to bring the traditional post occupancy evaluation process to the pre-construction stage.\nPractical implications: - The implementation of this method in practice can improve the efficiency of the designer-client communication, and also maximize the clients' satisfaction during the requirements specification and design review process.\nSocial implications: - This study built up a collaborative working environment for both designer and clients. It can strengthen the designer-client relationship during the briefing and design stage.\nOriginality/value: - The user activity simulation and requirements management methods are innovatively applied together to enhance the clients' performance during designer-client communication.\n",
"role": "assistant"
}
] |
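The requirements-and-feedback interface in the UPOEM paper above stores each client requirement as an attribute of a room and records the client's pre-occupancy review against it. The following is a minimal sketch of what such records could look like; the field names, the example room, and the review rule are illustrative assumptions, not the paper's actual database schema.

```python
# Sketch of per-room requirement records with client review results,
# in the spirit of the UPOEM's requirements-and-feedback database.
from dataclasses import dataclass, field

@dataclass
class Requirement:
    factor: str                    # e.g. "size", "location", "adjacency", "view"
    target: str                    # the client's stated requirement
    satisfied: bool | None = None  # filled in during the pre-occupancy review
    comment: str = ""              # client feedback for the designer

@dataclass
class Room:
    name: str
    requirements: list[Requirement] = field(default_factory=list)

    def review(self, factor: str, satisfied: bool, comment: str = "") -> None:
        """Record the client's evaluation of one factor of this room."""
        for req in self.requirements:
            if req.factor == factor:
                req.satisfied, req.comment = satisfied, comment
                return
        raise KeyError(f"no requirement on '{factor}' for {self.name}")

# Hypothetical usage during a virtual walk-through:
seminar = Room("Seminar room 2.01", [
    Requirement("size", ">= 40 m2, seats 24"),
    Requirement("adjacency", "next to the staff office"),
    Requirement("view", "daylight on at least one wall"),
])
seminar.review("adjacency", False, "separated by the stair core in this layout")
open_items = [r for r in seminar.requirements if r.satisfied is False]
print(f"{seminar.name}: {len(open_items)} requirement(s) need design changes")
```

Keeping each evaluation result next to the requirement it answers is what lets the brief act as a yardstick during design review and gives the designer a trackable record of requirement changes.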
Bibliometric differences - a case study in bibliometric evaluation across SSH and STEM
|
[
"Benchmarking",
"Scholarly communications",
"Research impact",
"Societal impact",
"Bibliometric model",
"Knowledge frameworks"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\nIntroduction: You may find at least two distinct \"research areas\" in the world of research: the area of Science, Technology, Engineering and Mathematics (STEM) and the area of Social Sciences and Humanities (SSH). With a catching title, Olmos-Penuela et al. (2014) describe issues in evaluating and analyzing research from both areas: \"Are 'STEM from Mars and SSH from Venus'?\" They question the widespread assumption that STEM research is more useful than SSH research, and they find that STEM and SSH research are useful in different ways.\nMethod: Why make this bibliometric comparison between researchers? P. Zunde (1971) put forward three possible purposes of bibliometrics. More purposes can be found (Nicolaisen, 2007), and we have at least four: evaluation of research, modeling of the historical development of science, information search and retrieval and knowledge organization based on bibliographic coupling and co-citation analysis. We propose a fifth, visibility: visibility in the research databases (Aksnes and Rip, 2009; Wildgaard, 2015) and visibility as a prerequisite for accountability. Transparency and accountability as required in contemporary management and maybe reasons why bibliometrics is playing a role in the academic community (Mingers and Leydesdorff, 2015; Moed and Halevi, 2015).\nResults: Researcher X publishes mainly conference articles in proceedings and journals in English. Over the years, there is an ascending tendency, increasing after being appointed as professor and then going down again. The average number of authors on publications is 4.2[13].\nPerformance web[15]: From the radar chart, you get more dimensions to the comparison between X and A. In the first chart, X is evaluated against A and, in the next, it is the other way around.\nDiscussion: How to use the q-index and the performance web, respectively? We have mentioned two of the purposes which may be up front for bibliometric analysis: making research visible and evaluating research. By evaluating research, we often mean that we judge it. That is also what should be the purpose with the q-index. Are one or the other of X and A performing better? But as Spaapen and van Drooge (2011) point out evaluating can also be about learning. Hopefully, we can learn from the performance web to identify strengths and weaknesses. Will you tend to have a week left side of the chart if you are strong on the right side and vice versa?\n",
"role": "user"
},
{
"content": "Purpose: The purpose of this paper is to acknowledge that there are bibliometric differences between Social Sciences and Humanities (SSH) vs Science, Technology, Engineering and Mathematics (STEM). It is not so that either SSH or STEM has the right way of doing research or working as a scholarly community. Accordingly, research evaluation is not done properly in one framework based on either a method from SSH or STEM. However, performing research evaluation in two separate frameworks also has disadvantages. One way of scholarly practice may be favored unintentionally in evaluations and in research profiling, which is necessary for job and grant applications.\nDesign/methodology/approach: In the case study, the authors propose a tool where it may be possible, on one hand, to evaluate across disciplines and on the other hand to keep the multifaceted perspective on the disciplines. Case data describe professors at an SSH and a STEM department at Aalborg University. Ten partial indicators are compiled to build a performance web - a multidimensional description - and a one-dimensional ranking of professors at the two departments. The partial indicators are selected in a way that they should cover a broad variety of scholarly practice and differences in data availability.\nFindings: A tool which can be used both for a one-dimensional ranking of researchers and for a multidimensional description is described in the paper.\nResearch limitations/implications: Limitations of the study are that panel-based evaluation is left out and that the number of partial indicators is set to 10.\nOriginality/value: The paper describes a new tool that may be an inspiration for practitioners in research analytics.\n",
"role": "assistant"
}
] |
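The tool described in the bibliometrics paper above combines ten partial indicators into both a radar-style performance web and a one-dimensional ranking. The sketch below shows one plausible construction, normalizing each indicator against the best value among the researchers compared and collapsing the normalized vector into a single score; the indicator names, example values, equal weights, and the aggregation rule itself are assumptions for illustration, since the paper's exact q-index formula is not spelled out here.

```python
# Sketch: ten partial indicators -> performance web values + one score.
INDICATORS = [
    "journal_articles", "conference_papers", "monographs", "book_chapters",
    "citations", "h_index", "media_mentions", "phd_supervisions",
    "external_funding", "editorships",
]

def normalize(profiles: dict[str, list[float]]) -> dict[str, list[float]]:
    """Scale each indicator to [0, 1] by the best value among the
    researchers compared, so SSH- and STEM-typical outputs are
    benchmarked within the same web."""
    maxima = [max(p[i] for p in profiles.values()) or 1.0
              for i in range(len(INDICATORS))]
    return {who: [v / m for v, m in zip(vals, maxima)]
            for who, vals in profiles.items()}

def q_score(normed: list[float], weights: list[float] | None = None) -> float:
    """Collapse the ten normalized partial indicators into one number."""
    weights = weights or [1.0] * len(normed)
    return sum(w * v for w, v in zip(weights, normed)) / sum(weights)

# Hypothetical raw counts for an SSH professor X and a STEM professor A.
profiles = {
    "X": [4, 38, 1, 3, 260, 12, 2, 5, 1.2, 0],
    "A": [22, 6, 3, 9, 310, 14, 11, 8, 0.6, 2],
}
webs = normalize(profiles)  # each vector is what the radar chart plots
for who, normed in webs.items():
    print(who, round(q_score(normed), 3), [round(v, 2) for v in normed])
```

The same normalized vector serves both purposes the paper names: plotted on ten axes it is the multidimensional performance web, and summed it is the one-dimensional ranking.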
Quality infrastructure improvement: using QFD to manage project priorities and project management resources
|
[
"Quality function deployment",
"Project management",
"Communication technologies"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\nIntroduction: National City Corporation (NYSE: NCC), headquartered in Cleveland, Ohio, is one of the nation's largest financial holding companies, operating through an extensive distribution network in Ohio, Illinois, Indiana, Kentucky, Michigan, and Pennsylvania, and in selected markets nationally. National City's primary businesses include commercial and retail banking, consumer finance, asset management, mortgage financing and servicing, and payment processing. National City employs more than 33,000 individuals.Enterprise Production Services (EPS) is responsible for the availability, integrity and performance of National City's production systems across the enterprise. We are first and foremost a service organization that measures success in meeting or exceeding agreed-upon service level commitments with our lines of business. The Process and Control department within EPS is responsible for managing internal IT improvement and remediation projects. Unlike most projects that deliver a new product or application at the end of the lifecycle, our internal IT projects deliver improvements or will remediate an existing condition in the production environment. Our challenge is to deliver the projects that have the greatest benefit to our internal and external clients while at the same time utilizing similar resources that perform day-to-day production support activities.Quality function deployment (QFD) is a total quality management tool developed in the 1960s by Drs Yoji Akao and Shigeru Mizuno to assure the quality of new products and services. Its first principles include a \"design\" approach, which means to begin at the highest-order requirements and to systematically \"deploy\" downstream to critical details in the design, build, and delivery of the product. This method has been proved effective in software and IT project management applications as an empirical process for defining and prioritizing engineering activities (Gorham and McDonald, 1997).\nQFD process for project selection: With the lack of standardized prioritization across departments, projects were all too often handed out without consideration of other project initiatives or the project workload that was placed on strategic resources. Repeatedly, the resources became overloaded with projects from different managers that all seemed to carry a high priority or no perceived benefit. This allowed the technical resources to determine their own priority and work on project initiatives that enhanced their technical ability and not necessarily that which had the most benefit to EPS, and in turn the external NCC customer. The QFD Green Belt(r) course (QFD Institute, 2000) provided the basis for adapting the process to the needs of our IT department. By selecting criteria that aligned project benefits with the strategic direction of EPS we are able to manage projects that helped to meet or exceed our service level commitments with our lines of business.\nDefining and prioritizing benefits: Our internal customers are EPS managers, team leads, and the technical resources that support the various platforms across NCC, each having their own goals and priorities for maintaining production systems. External customers represent end users of system applications, both internal and external to NCC.Because of the need to determine what criteria made some projects more important than others, we went to the gemba and interviewed technical personnel and managers. 
The feedback from the technical personnel revealed that they wanted to know the priority for each project so that they could focus their time accordingly. Understanding of priorities would keep them from jumping from task to task, depending on which manager was asking for project tasks that week. On the other hand, management wanted to know that projects were being completed that added benefit to NCC. Our next step was to form a stakeholders' committee composed of internal customers, which would maintain ownership of the project portfolio, prioritization, and status reporting for the initiatives. The team also had the responsibility for canceling projects, resource management, and placing projects on hold when another initiative of higher priority was in contention for a strategic resource. A new process called \"Quality Infrastructure Improvements\" (QII) was created to manage the data gathering, reporting, and prioritization for these projects. The committee's first task was to determine what benefits internal improvement and remediation initiatives had for the internal and external customers. Since not all benefits have an equal impact on the company, the analytic hierarchy process (AHP) was used to prioritize the benefits. This method has the internal customer comparing the benefits two at a time using a verbal scale from equal to extremely more important. This takes advantage of how people best make judgments - in pairs, using a natural language ordinal scale. AHP then converts these into numerical ratio scale priorities that accurately represent what matters most and by how much (see Figure 1). This ranking signifies that project initiatives that maintain the existing production environment should be considered before other initiatives, whereas improvement and proactive initiatives would place lower on the prioritization list and only become active when resources were available or if they also had a production issue or an information protection (IP) security, regulatory, or audit requirement. Table I explains the characteristics of the two highest ranking project benefits.\nDefining and prioritizing complexity: The next obstacle was to understand which resource should own an initiative to see it through the project lifecycle. After much discussion and brainstorming, the stakeholder committee determined three criteria that influence a project owner's effectiveness in successfully completing an initiative. The three criteria are characteristics of complexity within the production environment and often play a major role in the success of a project, as shown in Table II. Similar to the benefits, the relative degree of complexity was quantified using the AHP, as shown in Figure 2.\nBenefits versus complexity: The allocation of a project owner is based both on how important the project is (benefits to the company) as well as on the complexity of the problem. To better understand these relationships, Figure 3 was constructed.\nStrategic resource allocation guidelines: Next, we set guidelines for assigning a project owner resource to projects, as follows.\n1. Project management:\n* characterized by medium-high benefits and medium-high complexity;\n* these projects may be regarded as essential to EPS;\n* these projects mandate that project management methodologies be followed to ensure on-time and on-budget completion; and\n* the high-benefit and high-complexity nature of these projects requires project management focus.\n2. 
Component owners:\n* characterized by low-medium benefits and medium-high complexity;\n* normally small, short-term projects that require strong technical skills;\n* component owners will function as technical team leads since minimal integration is required; and\n* minimal project management knowledge is needed.\n3. Team leads or group managers:\n* characterized by medium-high benefits and medium-low complexity;\n* these projects are usually process improvement efforts to support production;\n* minimum integration across functional lines is necessary, which allows team leads and group managers to function as project managers; and\n* short time frames also characterize these projects.\n4. Infrastructure maintenance, WebOps, or midrange groups:\n* characterized by low-medium benefits and low-medium complexity;\n* these projects are usually identified by technical experts but executed by technical project coordinators; and\n* one project coordinator may end up managing multiple small projects.\n5. Selectively assign:\n* these projects are potentially on the boundary between multiple quadrants;\n* they require a better understanding of the project benefits, complexity, and resource constraints in effect at the time to properly assign the necessary resources; and\n* these initiatives will need to be selectively assigned because of their benefits, complexity, and resource constraints.\nProject Definition Form (PDF) and scoring projects across departments: To manage the information gathered during initial project definition, a web form was created to simplify entry and tracking of new initiatives. The Project Definition Form (PDF) allows users internal to EPS as well as those from other divisions to enter pre-defined criteria. The requestor completes a high-level description of the project along with information that relates to the characteristics of the project's benefits and complexity. The PDF is then reviewed by all members of the QII committee for general understanding of the project and how it may impact their department and NCC. Each member then responds to an electronic survey that records scores for each benefit and complexity based on a nine-point scale. This information is tallied and reviewed at the weekly QII meeting to facilitate discussion and to score each benefit and complexity on a nine-point scale as a collaborative effort.\nPrioritization of projects based on benefit and complexity using a matrix: The prioritized benefits and complexity were entered into the columns of a matrix and the projects were entered in the rows. The degree of benefit and complexity of each project was rated in the intersections using a nine-point scale. These ratings were then multiplied by the priorities of each benefit and complexity, and separately added across to yield the absolute weights, which were then classified as high, medium, or low depending on their scores. A portion of the matrix is shown in Figure 4.\nNext steps: Projects are added into the QII process as the need arises. Currently we are adding 5-6 projects monthly. The QII team reviews all projects through the electronic PDF and they are then ranked prior to the weekly meeting. During the meeting we review new initiatives and rank them according to benefits and complexity, and assign a project owner accordingly. 
We also review active projects, which submit a biweekly status report covering milestone completions, budget, resource, and timeline constraints. To keep the project prioritization matrix up to date with the strategic direction of NCC, the benefits and complexity will be reviewed annually and re-weighted accordingly. This will allow some projects to be re-prioritized to meet the new goals and objectives of EPS and NCC. The QII team also plans to measure the success of the completed initiatives and tie back the value added by completing them within the desired time frame. We also have an opportunity to look at the initiatives that are not being completed and determine which strategic resource limits the number of initiatives that can be managed. We can then make recommendations to management for additional resources to accomplish the lower priority initiatives or find other internal or external resources to fill the capacity need. We have also used the QFD process in a project risk initiation assessment. The assessment used QFD to gather risk factors for larger IT projects across NCC. Departments from Project Services, Engineering, Client Services, and Business Consulting gathered hundreds of risk factors, which were analyzed and grouped into five categories, and AHP was then used to weight each characteristic of the five categories. Next, each category was ranked to assign an overall project initiation risk score, which will be used in conjunction with the cost-benefit analysis to help determine project portfolio selection. The risk assessment will also be used as a foundation for a risk mitigation plan when the project becomes active.\nConclusion: QFD helped us to manage internal initiatives by prioritizing them according to the benefits they had for EPS. We are now able to distinguish between pet projects and those that have the highest payback for NCC, and to assign appropriate resources to complete the initiatives within an acceptable time frame. This allows both project management and technical resources to schedule their time according to priority (Kerzner, 2001; Kendal and Rollins, 2003), which reduces non-effective multitasking and will allow more internal initiatives to be completed in the long run.\n",
"role": "user"
},
{
"content": "Purpose: - A common difficulty for an information technology (IT) department is to focus resources (people) where they can deliver the greatest benefit for the efforts made. It sometimes happens that a bias develops where some departments insist that their projects are more critical than others and they demand not only that their projects be attended to immediately, but also that the most senior people be assigned to them. To better utilize resources, it makes sense to prioritize projects on their benefit contribution to internal and external customers, as well as to assign skill-appropriate people to work on them. National City has applied QFD to help them in identifying and prioritizing the needs of their customers and then using these to evaluate each project for its benefit contribution and for its degree of complexity, which will help assign appropriate resources to the project.\nDesign/methodology/approach: - This paper will show how we customized the QFD process through the QFD Green Belt(r) training of the QFD Institute. It is shown how the list of internal customer needs, which became the criteria for determining project benefit, and then developed another set of criteria to judge the project complexity and the required technical skill level to work on the project, were developed. The paper includes charts and matrices defining the process.\nFindings: - National City can now prioritize its internal IT projects and assign the most appropriate people to them in order to deliver the greatest value to National City's customers.\nOriginality/value: - QFD helped National City to manage internal initiatives by prioritizing them according to the benefits they had. Project management and technical resources can now schedule their time according to priority, which reduces non-effective multitasking and will allow for more initiatives to be completed in the long run.\n",
"role": "assistant"
}
] |
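The QFD paper above describes two computational steps: AHP pairwise comparisons that turn stakeholder judgments into ratio-scale priority weights, and a prioritization matrix in which each project's nine-point ratings are multiplied by those weights and summed across to yield absolute scores. The sketch below illustrates both steps, using the geometric-mean approximation of AHP and hypothetical benefit names, judgments, and projects; National City's actual criteria and numbers appear in the paper's figures, not here.

```python
# Sketch: AHP priority derivation + QFD-style weighted scoring matrix.
import numpy as np

def ahp_priorities(pairwise: np.ndarray) -> np.ndarray:
    """Derive priority weights from a reciprocal pairwise-comparison
    matrix using the geometric-mean (row-product) approximation."""
    geo_means = np.prod(pairwise, axis=1) ** (1.0 / pairwise.shape[0])
    return geo_means / geo_means.sum()

# Hypothetical judgments on Saaty's 1-9 scale: how much more important
# is the row benefit than the column benefit?
benefits = ["production issue", "IP/regulatory/audit", "improvement", "proactive"]
pairwise = np.array([
    [1.0, 2.0, 5.0, 7.0],
    [1/2, 1.0, 4.0, 6.0],
    [1/5, 1/4, 1.0, 3.0],
    [1/7, 1/6, 1/3, 1.0],
])
weights = ahp_priorities(pairwise)
for name, w in zip(benefits, weights):
    print(f"{name:22s} weight {w:.3f}")

# Nine-point benefit ratings per project (rows), one column per benefit.
projects = ["Patch automation", "Monitoring upgrade", "Doc cleanup"]
ratings = np.array([
    [9, 7, 3, 1],
    [3, 5, 9, 7],
    [1, 3, 5, 3],
])

# Multiply each rating by the criterion priority and add across the row.
absolute = ratings @ weights
for name, score in sorted(zip(projects, absolute), key=lambda p: -p[1]):
    print(f"{name:20s} absolute score {score:5.2f}")
```

Classifying the resulting absolute scores as high, medium, or low, as in the paper's Figure 4, is then a simple thresholding step, and the same construction applies to the complexity criteria.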
The role of top management team's information technology (IT) infrastructure view on new product development: Conceptualizing IT infrastructure capability as a mediator
|
[
"Communication technologies",
"Senior management",
"Product development",
"Development costs",
"Product quality"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\nIntroduction: Product life cycles are getting shorter, making the introduction of new products at the right time and with features that satisfy customer expectations in terms of quality and functionality essential for firms to stay competitive and to survive. Therefore, understanding how to achieve an effective new product development (NPD) process is vital for corporate success. However, enhancing NPD process performance is not easily accomplished because the efficiency of an NPD process is affected by many factors. Among them is a firm's information technology (IT) (see Iansiti and Clark, 1994; Barua et al., 1995; Verona, 1999; Young, 2000; Turrell, 2003; Farrell, 2003; Murray, 2003; Nambisan, 2003). Nevertheless, the term IT is too broad and refers to a combination of IT infrastructure, human IT resources, and IT-enabled intangibles (Bharadwaj, 2000). Hence, identifying the benefits achieved by a firm's IT will not be too helpful if these effects are not decomposed into separate dimensions. Correspondingly, research is warranted that identify the nature of the relationships between separate IT components and the NPD process.Among the IT components described, IT infrastructure, defined as the combination of reliable services that are shared throughout the firm and coordinated centrally, usually by the firm's IT department as well as the tangible assets such as hardware platforms, operating systems, network and telecommunications technologies (e.g. e-mail, instant messaging, and groupware), software applications, databases, intranet, and the Internet (La Belle and Nyce, 1987; Ross et al., 1996; Sambamurthy and Zmud, 1999; Sethi et al., 2003; Von Simson, 1990; Weill and Broadbent, 1998; 2000), comprise the majority of a firm's IT budget. For example, in the US, firms with some degree of sophistication of top management team (TMT) view on IT infrastructure spend between 37 per cent and 50 per cent of their total IT investments in infrastructure (Weill and Broadbent, 1998).Lately, firms are ramping up their IT infrastructure investments in hopes of enhancing their NPD program outcomes. For example, in a recent interview, Microsoft's Chief Information Officer, Stuart Scott, notes that his firm now spends 45 per cent of its IT budget on supporting NPD efforts, a significant increase from the 30 per cent that was in the past (Murphy, 2007). Furthermore, the company's Chief Operating Officer, Kevin Turner, contends that the 45 per cent figure will increase to 65 per cent of the firm's IT department budget in the near future (Murphy, 2007). Hence, the management's challenge is maximizing the fit between current strategic investments and the firm's future opportunities (Moorman and Slotegraaf, 1999). Therefore, IT infrastructure is a key component that requires scholarly attention. Unfortunately, despite its importance, its critical antecedents and how it influences firm's processes, such as NPD, is not yet investigated. More specifically, in this manuscript, I examine the influence that sophistication of TMT's view has on IT infrastructure.The contribution of this study is twofold. First, I open the black box of how TMTs can influence NPD process performance by their perspectives on their firm's IT infrastrucutres. Past research has shown that there is a relation between the firm's TMT and its NPD process. In fact, one major driver of NPD project timeliness is found to be top management support (Cooper and Kleinschmidt, 1994). 
Moreover, meta-analyses of the NPD literature consistently highlight a strong influence of the TMT's direct support of NPD activities on NPD performance (see Montoya-Weiss and Calantone, 1994; Henard and Szymanski, 2001). However, no study has investigated how TMT decisions that are seemingly unrelated to NPD, such as the ones regarding IT infrastructure, could affect the firm's NPD process performance. This study fills this gap. In doing so, I draw from the resource-based view (RBV) of the firm (Wernerfelt, 1984; Barney, 1991) and conceptualize IT infrastructure as a capability of the firm that acts as a mediator between the sophistication of TMT views on IT infrastructure and NPD process performance. The second contribution concerns the payoffs from IT infrastructure investments. IT infrastructure investments are long-term in nature and provide indirect returns, which makes their justification difficult. One way to demonstrate the payoff is to draw the link between the IT infrastructure and improvements in the firm's processes (Broadbent et al., 1999). One such process is the NPD process. Drawing the link between the NPD process and IT is an appropriate undertaking in order to justify investments in IT infrastructure because the NPD process is argued to be an emergent knowledge process that a firm's IT should support (Markus et al., 2002). Markus and colleagues define emergent knowledge processes as organizational activity patterns with an emergent process of deliberations, an actor set with unpredictability in terms of job roles and sufficiency of knowledge and skills for the tasks, and a need for extensive knowledge sharing. Consequently, technological capabilities are important drivers of product development outcomes (Verganti, 1999). To summarize, this paper investigates the following research questions:\nRQ1. How does the sophistication of the TMT's IT infrastructure view influence a firm's IT infrastructure capability?\nRQ2. How does IT infrastructure capability influence NPD process performance?\nThe remainder of the paper is organized as follows: the next section presents a review of the literature on NPD process performance and describes IT infrastructure dimensions. Next, drawing from the resource-based and knowledge-based views, the conceptual model that links the sophistication of the TMT's view on IT infrastructure to the NPD process performance measures of cost, cycle time, and quality is presented. Lastly, several conditions that moderate the strength of the proposed relationships are delineated. The paper concludes with a discussion on how the provided theoretical framework extends our understanding and suggests paths for future research.\nTheoretical background: NPD process performance and IT infrastructure as a capability: NPD process performance\nConceptual model and research propositions: Figure 1 illustrates the conceptual research model that relates the TMT's view on IT infrastructure to NPD process outcomes and examines the mediating role of IT infrastructure capability in this relationship.\nA crucial antecedent to IT infrastructure capability: sophistication of top management team's IT infrastructure view\nEnvironmental moderators of the IT infrastructure capability-NPD process linkage: Certain contingencies might moderate (i.e. increase or decrease) the strength of the relationship between IT infrastructure capability and the NPD process. One such environmental moderator is the intensity of competition. 
In the absence of competition, a firm would perform well regardless of whether it introduces new products quickly or not (Kohli and Jaworski, 1990; Jaworski and Kohli, 1993). Moreover, the firm might pass the cost of developing the new product on through the sales price of the product. By contrast, in a competitive environment, firms must satisfy customers' changing needs more quickly and at a lower cost than their competitors. Thus, firms possessing more extensive IT capabilities would better improve their NPD process performance. As a result, IT capability is expected to have a greater effect on NPD process effectiveness in markets with intense competition. The discussion above suggests the following:\nP3a. The greater the competitive intensity, the stronger the relationship between IT infrastructure capability and NPD process performance.\nAnother environmental factor posited to moderate the relation between IT infrastructure capability and the NPD process is technological turbulence, which is defined as the rapid change in the entire process of transforming inputs to outputs and the delivery of those outputs to the end customer (Kohli and Jaworski, 1990; Jaworski and Kohli, 1993). The IT infrastructure capability of firms that operate in industries where technology is almost stable would not affect the NPD process as strongly as the IT infrastructure capability of firms competing in industries with higher rates of technological change. More specifically:\nP3b. The greater the technological turbulence, the stronger the relationship between IT infrastructure capability and NPD process performance.\nThe third environmental characteristic that could influence the relationship between IT infrastructure capability and the NPD process is market turbulence, which is defined as the changes in the composition of customers and their preferences and needs (Kohli and Jaworski, 1990; Jaworski and Kohli, 1993). Firms competing in more turbulent markets should modify their products and services continually and launch new products more frequently in order to serve the changing needs of the customers. Greater IT infrastructure capability is a necessity for firms to be able to respond more rapidly to changes in the marketplace (Quinn, 1992). As a result, firms competing in more turbulent markets are likely to have a greater need for a higher IT infrastructure capability in comparison to firms in more stable markets. This leads to the following proposition:\nP3c. The greater the market turbulence, the stronger the relationship between IT infrastructure capability and NPD process performance.\nDiscussion and conclusion: This paper aimed to broaden the understanding of how TMT views on IT can influence NPD outcomes. First, based on the extant literature and borrowing from the resource-based and knowledge-based views, it was argued that sophistication of the TMT view on IT infrastructure will significantly enhance IT infrastructure capability. Moreover, this paper contributes to both the information technology and NPD literatures by drawing the distinction between IT infrastructure capability and IT capability and by suggesting that the more extensive a firm's IT infrastructure capability, the more effective is its NPD process. IT infrastructure capability enhances NPD process effectiveness by reducing the cycle time and cost of NPD and improving NPD process quality. 
Lastly, three moderating factors - the intensity of competition, technological turbulence, and market turbulence - were posited to affect the strength of the relationship between a firm's IT infrastructure capability and the efficiency of its NPD process. The task of identifying the path from IT investments to business value has been a major challenge for scholars. Some studies have therefore utilized \"process-oriented views\" to examine the effects of IT investments on intermediate business processes (Barua et al., 1995; Mooney et al., 1995). Although the relationship between information technology and the NPD process has been a focus of several studies, there is still a significant gap in the current literature examining how IT infrastructure influences NPD process effectiveness. Hence, managers are largely left in the dark in terms of understanding how the large sums of money they invest in IT systems can facilitate their firm's NPD efforts. In addition to the theoretical contributions indicated above, this research has managerial implications. By adopting an enabling IT infrastructure view, business and IT managers can create IT infrastructure capabilities, which would in turn improve their NPD processes. Moreover, our research prescribes two levers to managers who aim to improve their processes: IT infrastructure functionality and scope. For example, MasterCard recently spent $160 million on IT and hopes that this will improve its speed to market. Our framework suggests that cycle times will improve if these investments extend either the functionality or the scope of the firm's IT infrastructure. Since this research is conceptual, it does not empirically test the propositions. As such, future research should conduct empirical analysis of this paper's propositions, including developing measures with assured reliability and validity, as well as a full-scale test of the conceptual model. For example, the construct \"Sophistication of TMT's IT infrastructure view\" can be conceptualized as a categorical variable with four levels: no view, utility view, dependent view, and enabling view. Reflective items measuring a firm's IT infrastructure investment relative to the industry average; the proportion of IT infrastructure investments in the IT department's overall budget; and whether cost only, a balance of cost and expected flexibility, or option values are considered when justifying IT infrastructure investments can be deployed to capture the increasing levels of sophistication. Moreover, the IT infrastructure dimensions of functionality and scope can be measured by items borrowed from Tallon et al. (2000). For example, the IT infrastructure functionality dimension can be captured by items probing the transparency of IT applications and services, the extent to which systems can handle multiple applications, and the use of commonly agreed IT standards. Further, IT infrastructure scope can be measured by items inquiring about the levels of compatibility, modularity, and scalability of the firm's IT infrastructure. Firms scoring high on these levels will have IT infrastructures with higher reach and range. Finally, when empirically testing the proposed model, scholars should also control for firm size and industry, since IT use differs in small-sized firms and varies across business sectors (Kagan et al., 1990). The model presented in this paper is simple and considers only three moderating effects on the relationship between IT infrastructure capability and NPD process efficiency. 
Future research should focus on extending the theoretical work presented here by investigating whether other factors moderate this relationship. Moreover, scholars should examine what strengthens or weakens the relationship between the sophistication of top management's view on the firm's IT infrastructure and IT infrastructure capability. One such variable is industry type, as Cash et al. (1988) argue that IT is more critical to a firm's strategy in certain industries than in others. Another extension to this study would be to examine the IT infrastructure capability and NPD process relationship from a transaction cost economics perspective (Williamson, 1981). It can be posited that a more extensive IT infrastructure capability would reduce the transaction costs between the firm and its suppliers, and thereby improve NPD process efficiency, primarily by reducing the cost of the process. Geographically dispersed NPD teams are being increasingly utilized by multinational firms such as IBM. However, these teams face significant challenges in overcoming differences in culture, work habits, and time zones. Based on a case study of a large-scale software development project at IBM, Boutellier et al. (1998) argue that intensive IT use mitigates some of the hurdles that can be present in multinational product development projects. Therefore, another area for further research could be investigating the moderating role of NPD team virtuality or team member proximity in the relationship between IT infrastructure capability and NPD process performance. Lastly, although the proposed model in this paper contributes to the literature by drawing the link between top management's view on IT infrastructure and the relation between IT infrastructure capability and the NPD process, it does not consider the implications of adopting an extensive IT infrastructure for firm performance. As a result, scholars should consider going one step further and include firm performance as the final outcome of the influence of the IT infrastructure view when testing this model or its variations empirically.\n",
"role": "user"
},
{
"content": "Purpose: - The purpose of this paper it to investigate how sophistication of top management view on information technology (IT) infrastructure influences the firm's IT infrastructure capability and the effect of IT infrastructure capability on new product development (NPD) process outcomes such as cost, cycle time, and quality.\nDesign/methodology/approach: - Drawing from research-based and knowledge-based theories, a conceptual model on how IT infrastructure influences NPD process efficiency is developed.\nFindings: - The paper demonstrates that sophistication of top management view on IT Infrastructure enhances NPD process through its effect on IT infrastructure capability. IT infrastructure capability enhances the NPD process efficiency by reducing the cycle time and cost of NPD projects and improving the NPD process quality.\nPractical implications: - It is useful for practitioners to know that by adopting an enabling IT infrastructure view, managers can create IT infrastructure capabilities that could improve their NPD processes significantly.\nResearch limitations/implications: - Future research should conduct empirical analysis of this paper's propositions and could examine what moderates the relationship between the sophistication of the top management's view on the firm's IT infrastructure and IT infrastructure capability. Scholars could also include firm performance as the final outcome of the influence of IT infrastructure view when testing the proposed model empirically.\nOriginality/value: - Recent research pointed out that IT enabled the development of efficient new processes. This paper extends this argument by conceptually investigating how IT infrastructure enhances NPD process of a firm.\n",
"role": "assistant"
}
] |
Brand as a cognitive mediator: investigating the effect of media brands as a structural feature of textual news messages
|
[
"Media",
"Emotions",
"Brand",
"LC4MP",
"Psychophysiology"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\nIntroduction: The digital era has introduced profound changes in the ways people consume news. First, the news is increasingly consumed using electronic devices and thus in a different habitual and technological context than previously. Second, publishing content and establishing media channels has become technically easy, which has led to an increase in the number of media sites or media-like sites. Third, a growing number of people obtain their news through social media platforms, curated by their peers. As a consequence of these developments, the competition among different media has dramatically increased, which has put the media under mounting pressure to find new ways to differentiate. As with other products, one of the main tools for media corporations to differentiate their products is branding (Chan-Olmsted, 2006; Tungate, 2004).\nTheoretical background: The limited capacity model of motivated media processing\nMethodology: The current study used psychophysiology, a novel method used to study the effects of media brand knowledge - a structural feature of the message - on the emotional and cognitive processes activated during news reading. Self-reports, facial EMG and EEG were used to assess emotional valence, emotional arousal and attention. Stimulus material consisted of 42 news messages, which varied along the valence (i.e. pleasantness) and involvement (i.e. relevance to the reader) dimensions and were framed with different media brands that varied along the dimensions of familiarity and credibility. The following sections explain the variables and manipulations and the experimental procedure.\nResults: Manipulation check\nDiscussion: This study investigated the moderating effect of media brand knowledge in the processing of textual news messages. The findings suggest that the media brand acts as a structural feature of the media message and influences the processing of the news. This effect, however, is shown in the memory tests and self-reported arousal ratings, but only marginally in the physiological data. Hence, it seems the influence of the brand is less physiological and more related to cognitive considerations than unconscious emotional responses. However, several other findings emerged in the psychophysiology: we confirmed the notion of negativity bias in relation to news message valence, and highlight the importance of the content relevance to the reader's attention, self-reported arousal and memory.\n",
"role": "user"
},
{
"content": "Purpose: This study investigates how media brand knowledge, defined as a structural feature of the message, influences emotional and attentional responses to, and memory of, news messages.\nDesign/methodology/approach: Self-reports, facial electromyography (EMG) and electroencephalography were used as indices of emotional valence, arousal and attention in response to 42 news messages, which varied along the valence and involvement dimensions and were framed with different media brands varying along the familiarity and credibility dimensions.\nFindings: Compared to the no-brand condition, news framed with brands elicited more attention. The memory tests indicated that strong media brands override the effect of involvement in information encoding, whereas details of news presented with Facebook were not well encoded. However, the headlines of news framed with Facebook were well retrieved. In addition, negative and high-involvement news elicited higher arousal ratings and corrugator EMG activity. News framed with familiar and high-credibility brands elicited higher arousal ratings.\nResearch limitations/implications: Relevant for both brand managers and audiences, the findings show that building credibility and familiarity both work as brand attributes to differentiate media brands and influence information processing.\nOriginality/value: The results highlight the importance of media brands in news reading: as a structural feature, the brand is used as a proxy to process the message content. The study contributes by investigating how the type of source influences the reception and encoding of the mediated information; by investigating the emotional effects of brands; and by confirming previous findings in media psychology literature.\n",
"role": "assistant"
}
] |
A comparative analysis of the Taguchi and Shainin DOE techniques in an aerospace environment
|
[
"Taguchi methods",
"Experimental design",
"Aerospace industry",
"Composite materials"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\n1 Introduction: Statistical quality improvement techniques such as design of experiments and Taguchi methods form an essential part of the search for improved product performance. These techniques are finding greater prominence in industry through the development and implementation of the Six-Sigma strategy (Antony et al., 2003). However, the effective implementation of the Taguchi experimental design technique within industry can be considered to be poor. Companies cite the complexity of the technique as being the major limiting factor as to its use.It has been argued that the DOE technique developed by Dorian Shainin provides a powerful yet simpler approach that lends itself to easier implementation in an industrial environment. This paper initially describes both the Taguchi DOE technique and Shainin's key variables search technique (KVST) and then goes onto compare the accuracy, validity and user friendliness of both approaches through the development of a case study. Its aim is to provide a new perspective on the application of different DOE techniques so as to stimulate greater use and development of these statistical methods in industry.\n2 Background to DOE techniques: The statistical approach to design of experiments and the analysis of variance (ANOVA) technique was developed by R.A. Fisher back in the 1930s. The technique provides a very powerful and economical method to determining significant factors and factor interactions that affect variability within a product. This classical approach to the design of experiments was further developed in the late-1980s when Genichi Taguchi developed his own form of statistical design of experiments.Whilst the work of Fisher and Taguchi is relatively well known in modern manufacturing industry, the work of Dorian Shainin receives less acclaim but his non-academic statistical technique is gaining greater prominence. The Shainin DOE technique permits the luxury of considering as many variables as can be identified (Bhote, 1991). The subsequent grouping of these variables into \"families\" and identifying the most influential variables based on statistical significance as opposed to assumptions provides a greater chance of identifying the major factors of variance. Shainin identifies and categorises the major factors contributing to variance as Red X, Pink X and Pale pink X (Goodman and Wyld, 2001). 
Red X being the major factor causing variance, Pink X being the second factor and Pale pink X being the third.This paper develops both the Taguchi DOE and Shainin's KVST by conducting experiments on identifying the influencing variables that control the joint strength of honeycomb composite tongue and slot joints for aerospace applications.2.1 The Taguchi DOE technique\n3 Application of Taguchi DOE and Shainin's KVST: 3.1 Introduction\n4 Experimental design using the Shainin KVST: The Shainin KVST begins its approach by identifying the study objectives, in this case these are:* To identify the key variables that influence the joint strength performance and to distinguish them from the variables that have the least influence.* To adjust the tolerances of the least influential variables in order to reduce cost of manufacture (Antony, 1999).* To identify Red X (the major factor affecting joint strength), pink X (the second major factor affecting joint strength) and Pale pink X (the third factor affecting joint strength) from the experiments constructed for two load orientations.* To determine the optimal settings for each of the key variables highlighted above.* To identify any significant two-order main effect interactions that are likely to impact on joint strength performance.The Shainin technique follows the five-step methodology Anthony (1999) and is shown in Section 2.2. Each of the steps identified in this section are developed further in the next section.4.1 Step 1 Develop and test a list of input variables\n5 Analysis of results and evaluation of the techniques: The implementation and subsequent use of experimental design techniques within an industrial environment relies primarily on the ease of application of the technique and its ability to provide accurate and detailed information to the user. The accuracy of the information should be further supported by the ease in which the information supplied to the system can be validated at various stages during the application of the technique so as to reduce time and effort during experimentation.Through the application of both the Taguchi and Shainin DOE techniques, it has been possible to analyse and qualify both the ease of use and the accuracy of information obtained from the techniques. The following conclusions have been drawn:* The Shainin technique showed increases in joint strength over and above the increases identified with the Taguchi technique. A 2.8 per cent increase in strength was shown on the direct tensile load orientation and a 5.4 per cent strength increase was seen on the transverse load orientation.* Both techniques provided different results with regards to factor/variable level settings. Both techniques, however, were able to highlight the least statistically significant factors/variables and this was used to adjust the levels accordingly without affecting joint strength to any appreciable extent. This also allowed for the achievement of a single joint construction in both cases. Table XVIII shows the factor/variable level settings for each load case. Factors in italics on the Taguchi table show them as the least significant factors with blanks on Shainin table signify variables can be set at any level due to them being unimportant to the joint.* The development of a single joint construction capable of providing increases in strength for all load orientations is important for this engineering application. 
A single joint construction provides design flexibility and improves manufacturing efficiency through reducing set-up times, etc. Through adjusting the least significant variables/factors it was possible to find a single joint construction that could be used for both load orientations whilst still providing joint strength increases in these load orientations.* The Shainin technique was significantly easier to implement and to obtain results. The use of this simple approach enabled the experimentation to take approximately 75 per cent less time to complete and utilised 50 per cent fewer human resources. However, the amount of testing required to obtain realistic results was some three times greater through employing the Shainin approach. This was down to the need to undertake full factorial experimentation. This issue is critical to the management of experimental design work since the amount of testing could have a significant impact on R+D costs.* The application of Taguchi's robust parameter design allowed the team to measure and evaluate the variation in the strength readings. This piece of system \"additionality\" was of particular importance since it allowed the user to identify if the variation in strength could be adversely affected even though the break strength of the joints increased.* The validity of the results emanating from the Taguchi approach was somewhat suspect. The Taguchi technique initially computed a joint construction that was impossible to manufacture (as seen in the transverse load orientation) whereby the identification of a tongue length that was too large for the slot length.* Table XIX shows the relative merits of both experimental design techniques as applied to the application highlighted in this paper.\n6 Conclusions: This paper has compared the operational effectiveness of the Taguchi and Shainin DOE techniques through their application to improving the joint strength of an aerospace structural problem. The following are general conclusions that have emerged from the experimentation:* The Shainin technique allows for the quick and simple identification of the three major factors that influence the system, i.e. Red X, Pink X and Pale pink X. This is achieved through the use of a systematic methodology based around full factorial experimentation. It can be considered primarily as a problem solving technique in which it attempts to identify the primary root causes of the problem (De Mast, 2004).* The Shainin technique relies heavily on using engineering judgement and simplistic statistical tools to \"funnel\" down the factors to a point where full factorial experimentation can be undertaken. In fact, the Shainin technique relies very much upon the correct identification of variables but moreso, in the ranking and subsequent allocation of those variables into the experiment. The Shainin technique, however, provides the ideal opportunity to solve problems quickly and effectively by allowing the analyst to change the important factors that affect the mean response of a system whilst allowing for cost effective action to be taken on the unimportant factors (i.e. to set them at the most economical level since they have little effect).* In an industrial environment, the Shainin technique provides a powerful and reliable method of experimental design that is easily implemented throughout the organisation. It is highly user friendly and can quickly identify the main effects through full factorial experimentation. 
The use of interaction plots can easily and quickly highlight interactions. Its weakness, however, is in the skill and knowledge required to firstly correctly identify the variables and then secondly, to allocate those variables to the experiment. This relies on pre-experimental analysis to funnel down the variables to a full factorial level.* The Taguchi approach provides for a powerful product/process enhancement capability where system robustness can be created through the identification of a wider range of influential variables that can affect systems output. The Taguchi technique provides its strength through being able to accurately identify main effects and interactions through partial factorial experimentation. This approach will find relevance in an industrial setting particularly where a large number of factors are considered critical for the experimental process. More complex than the Shainin approach, it is likely to be utilised by engineers and managers with a mathematical background. Its strength in partial factorial analysis is also its weakness when it comes to accuracy of factor and interaction identification simply because partial factorial analysis does not test every experimental combination.\n",
"role": "user"
},
{
"content": "Purpose: - To highlight the application and to compare the effectiveness of the Taguchi and Shainin experimental design processes as applied to aerospace structural components.\nDesign/methodology/approach: - This paper applies both the Taguchi and Shainin experimental design techniques to optimizing the design of honeycomb composite joints. The techniques are fully applied, the results analysed and their user friendliness is assessed.\nFindings: - This paper identifies an optimum parameter setting for composite joints gained from applying these experimental design techniques. Significant improvements in joint strength are achieved through identifying a new joint setting.\nPractical implications: - The adoption of the experimental design techniques outlined in this paper and their application to a real engineering problem will enable a company to apply the techniques and to attain improvements in terms of cost and quality.\nOriginality/value: - The analysis of both the Taguchi and Shainin methodologies and the resulting conclusions as to their effectiveness for industry is the real value of this paper. This paper will be valuable for quality professionals, design engineers and manufacturing specialists in a wide range of industries.\n",
"role": "assistant"
}
] |
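The cost/accuracy trade-off described in the record above, Shainin's full factorial final step versus Taguchi's partial (orthogonal array) designs, can be made concrete in a few lines of code. The following Python sketch is illustrative rather than taken from the paper's experiments: the seven factor names and the strength readings are invented, while the L8 design is the standard two-level orthogonal array.

```python
# Illustrative sketch only: run counts and main-effect estimation for a
# full factorial design (Shainin's final step) versus a Taguchi L8 array.
# The factor names and strength readings below are hypothetical.
from itertools import product

factors = [f"F{i}" for i in range(1, 8)]  # seven hypothetical two-level factors

# Full factorial: every combination of levels is tested.
full_factorial = list(product([0, 1], repeat=len(factors)))
print(f"Full factorial runs: {len(full_factorial)}")  # 2^7 = 128

# Standard Taguchi L8 orthogonal array: seven two-level factors in 8 runs.
# The economy has a price: untested combinations mean some interactions are
# confounded with main effects, the weakness noted in the paper's conclusions.
L8 = [
    (0, 0, 0, 0, 0, 0, 0),
    (0, 0, 0, 1, 1, 1, 1),
    (0, 1, 1, 0, 0, 1, 1),
    (0, 1, 1, 1, 1, 0, 0),
    (1, 0, 1, 0, 1, 0, 1),
    (1, 0, 1, 1, 0, 1, 0),
    (1, 1, 0, 0, 1, 1, 0),
    (1, 1, 0, 1, 0, 0, 1),
]
responses = [52.0, 55.5, 49.8, 57.1, 51.2, 54.0, 50.5, 56.3]  # made-up strengths

# A main effect is the mean response at a factor's high level minus the mean
# at its low level; each level appears in exactly four of the eight runs.
for j, name in enumerate(factors):
    high = sum(r for run, r in zip(L8, responses) if run[j] == 1) / 4
    low = sum(r for run, r in zip(L8, responses) if run[j] == 0) / 4
    print(f"{name}: main effect = {high - low:+.2f}")
```

The contrast in run counts (128 versus 8 for seven factors) illustrates both why Shainin's route can require roughly three times more testing once its full factorial stage is reached, and why Taguchi's partial factorial economy carries the risk of inaccurate factor and interaction identification that the paper reports.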
Precise versus imprecise promotional rewards at small probabilities: Moderating from purchase value and promotion budget
|
[
"Ambiguity",
"Small probability",
"Promotion",
"Purchase value",
"Promotion budget",
"Promotional methods",
"Consumer behaviour"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\n__NO_TITLE__: Consumers are often dazzled with various forms of sales promotions. For example, imagine that you were considering buying a car and after an extensive search you felt rather confident about your choice and went to the market. Afterwards, you might be even astonished to realize that there existed diverse forms of promotions offered by different brand dealers. One might be a sure gift such as GPS, one might be a sweepstake for a small chance of full cover of an admirable tour, the other might be the random exposure to small opportunities among different rewards including diamond, air ticket, five-star hotels, etc. Unconsciously, your final choice might be altered. The vignette suggests a vital research problem: how consumers respond to prize promotions differing in gaining probability, reward value, as well as the precision of the reward.\nAmbiguity effect: The term of ambiguity is used to distinguish the class of decisions under uncertainty for which the odds or payoffs are not precisely known. For example, assessing the probability of winning a reward at 30 percent is unambiguous (i.e. there is exactly a 30 percent chance of winning and 70 percent chance of failing), but ambiguity exists when an individual can only infer that the success probability lies between 10 percent and 50 percent.The study of ambiguity in decision-making has primarily focused on probabilistic ambiguity (e.g. Ellsberg, 1961; Kahn and Sarin, 1988; Viscusi and Magat, 1999; see Table I as a review). In a seminal study, Ellsberg (1961) demonstrated that subjects often prefer to bet on the event that a red (or a black) ball will be drawn from an urn containing 50 red and 50 black balls than on the event if the ball would be drawn from an urn containing an unknown proportion of red and black balls, which violates the expected utility theory that prescribes that the two options should have equal utility. Einhorn and Hogarth (1985) present a descriptive model showing that people are ambiguity seeking at small probabilities (e.g. p=0.001) and ambiguity avoidance at moderate to high probabilities (e.g. p=0.5). Curley and Yates (1985) and Viscusi and Chesson (1999) also find that ambiguity avoidance increases with the expected probability of winning.Outcome ambiguity is also prevalent in managerial settings and may influence consumer decisions (Ho et al., 2002). In promotion strategies for instance, when the lottery-like promotions are stated imprecisely, such as \"30-50 percent discount\" or \"a $1,000-3,000 worth of prize\", from the consumer's point-of-view the exact payoff cannot be ascertained in advance. Pioneer work by Dhar et al. (1995, 1999) find that consumers prefer \"25 percent probability of saving 70-90 percent\" over the same promotion with a precise level, i.e. \"25 percent probability of saving 80 percent\", while the converse will happen when the probability level is 80 percent. Their empirically comparison has been short of theoretical explanations and it is unclear about its generalizability at small probabilities. In addition, there is little work on characterizing outcome ambiguity decision processes (see Casey and Scholz, 1991; Gonzalez-Vallejo et al., 1996) and the risk levels in these outcome ambiguity research are still normal (i.e. 
above 30 percent), though some previous studies support the idea that preference manners in the two dimensions might be congruent (Gonzalez-Vallejo et al., 1996; Kuhn and Budescu, 1996; Ho et al., 2002). Considering the small probability's popularity in promotions, it is important to examine outcome ambiguity framed within small probabilistic promotions. Following the idea that preference patterns in the probability and in the outcome ambiguity might be congruent, we hypothesize an ambiguity-preference pattern for outcome ambiguity at small probabilities:H1. When the probability of getting the promotional reward is small, promotion with imprecise reward performs better than that with precise reward.\nCertainty effect and overweighting of small probabilities: In decision making under risk, people overweight outcomes that are considered certain relative to outcomes that are merely probable, which is referred as the certainty effect (Kahneman and Tversky, 1979). The certainty effect suggests that promotion at 100 percent probability might be more attractive than that at 80 or 25 percent probability, when the promotional rewards are stated precisely.Another important feature of decision making under risk is that people tend to be risk-seeking when they face a small chance of winning a large amount, and risk-averse when facing a small chance of losing a large amount, which is generally referred as \"overweighting of small probabilities\". This can be illustrated by considering the choice of whether to receive a sure gain of 5, or to take a risky option with a 1 in 1,000 chance of winning 5,000. In a behavior experiment, Kahneman and Tversky (1979) find that a majority of the participants chose the risky option (72 percent) over the sure gain, which runs counter to the idea of risk aversion. Overweighting of small probabilities has received empirical support in some previous studies (e.g. Gonzalez and Wu, 1999; Kirby and Santiesteban, 2003; Thaler, 1985). What overweighting of small probabilities suggests for us is that with precise claims, consumers will prefer a small probability promotion to an equally costly 100 percent probability promotion. If people tend to prefer ambiguous outcomes at small probabilities hypothesis holds, a small-probability promotion with imprecise reward should perform better than an equally costly 100 percent probability promotion with precise reward.\nEffect of purchase value and promotion budget: Recently, \"overweighting of small probabilities\" has been challenged as a general rule for people's decision making. Chen and Jia (2005) suggest that \"interpersonal difference\" or \"status quo\" has to be taken into account. For a specific choice problem, (500,000, 0.001; 500) for instance, a wealthier person would be more likely to accept the risky option, since 500 represents less of a sacrifice for that person than it would for a poorer person. Besides the characteristics of consumers, purchase value is another important factor of reference or \"status quo\" deciding consumers' valuation of promotions (Russo, 1977; Samuelson and Zeckhauser, 1988). We propose that in high value purchases, consumers will consider a small value gift less of a sacrifice or even negligible, and consequently prefer winning a big prize at a small probability, whereas in low value purchases, a small value gift with certainty seems to be generous and attractive and consequently consumers are more reluctant to give it up, leading to a higher consumer response. 
Thus, comparing \"small probability promotion with imprecise reward\" with \"100 percent probability promotion with precise reward\", the effects of overweighting of small probabilities and the ambiguity effect jointly suggest the former to be better for high value purchases. However, in low value purchases, overweighting of small probabilities may not happen, which might counterbalance the ambiguity effect. The above arguments imply the following hypothesis:H2. For high value purchases, a small-probability promotion with imprecise reward performs better than an equally costly 100 percent probability promotion with precise reward, and for low value purchases, the difference is shortened.So far, we analyze the effectiveness of different promotion designs for both high and low value purchases keeping the promotion budget the same. In marketing practice, however, a higher value purchase is generally accompanied by a higher promotion budget. For example, companies use a cup as an incentive to promote an instant coffee but an MP3 to promote a PC. As the promotion budget increases, both the sure gain and the equally costly risky option increase. In this situation, previous studies demonstrate that as the associated payoffs increase, the weights on small probabilities tend to be attenuated. For example, from (5000, 0.001; 5) to (500000, 0.001; 500), people will be more reluctant to take the riskier options because of the increasing sure amount that they would lose (Chen and Jia, 2005; Hogarth and Einhorn, 1990). The above arguments imply the following hypotheses:H3. For low promotion budget, a small-probability promotion with imprecise reward performs better than an equally costly 100 percent probability promotion with precise reward, and for high promotion budget, the difference is shortened.\nStudy 1: the ambiguity effect across different probability regions: Method\nStudy 2: \"small-probability with imprecise reward\" versus \"100 percent probability with precise reward\" and the moderating effect of purchase value: Method\nStudy 3: testing the moderating effect of promotion budget: Method\nConclusion and discussion: Expenditures for probability promotions account for almost 50 percent of total promotion budgets and are growing faster than spending on advertising (Wisniewski, 1986). This growth and relative neglect of research on probabilistic promotion designs has expected academic research to explore systematically. According to prior work by Dhar et al. (1995), for promotions with high winning probabilities (i.e. 100 and 80 percent), the design of explicitly valued reward with certainty is the best strategy, whereas if the promotion campaign involves a relatively low winning probability (i.e. 25 percent), inexplicit expressions of the incentive value receive more favorable consumer responses than explicit expressions. However, the associated probabilities for probabilistic promotional incentives in real world are often very low. Theoretically speaking, decision making at very low or small probabilities is not in line with \"normal\" low probabilities, considering the well-known effect of overweighting of small probabilities. Furthermore, their work cannot provide systematic predictions about which and when of these local \"optimal\" strategies is better than the other. 
In respond to the need to fill in this gap in probabilistic promotion research, this study examines how consumers respond to precisely versus imprecisely stated promotional reward at varied probabilities, especially at small probabilities, as well as the boundary conditions.Our studies showed that small-probability promotions with inexplicitly valued reward perform better than the same promotion with explicitly valued reward (study 1). In terms of choosing between an explicitly valued sure gift and an inexplicitly valued reward at small probabilities, marketers should consider the product price. For high price products, offering a sure discount (or a sure gift) to everyone will not be as effective as offering a bigger discount (or a larger reward) stated imprecisely to a small proportion of consumers, whereas for low price products, giving out a sure gift with explicit value is a better strategy (study 2). Furthermore, if companies can afford to significantly increase their promotion budget, the advantage of an inexplicitly valued reward over an explicitly valued reward at small probabilities becomes more obvious (study 3). Let us reiterate that promotion budget here refers to how much money marketers plan to feedback to consumers for every purchase, since promotions are always claimed in this manner to consumers. Thus, consumer product companies such as Coca Cola or communication companies such as AT&T usually do have very large promotion budget in total, however might be of small monetary value per purchase. As we can see in reality, they do usually use \"everyone gets a reward\" strategy more often, such as \"giving a small gift\" or \"get 1 dollor off\". However, our findings suggest that when they are willing to increase the promotional budget per purchase significantly, small-probability promotions with imprecise reward perform better, such as winning an inexplicitly valued NBA ticket or a trip to Paris.Designers of promotions should be aware that probability levels and preciseness of the value expression of the reward seem likely to bias consumers' estimates of the probability of winning and the reward value. An upward bias may lead to disappointment and create negative consumer moods and attitudes and a downward bias may cause consumers not to join in the promotion. Thus, how can marketers escape from consumers' misjudgment? First, of course, designers should not exaggerate the winning probability and the value of the prize and should reduce any misleading element in their instructions. More importantly, they should devote more effort in explaining the value, the probability and stressing the random determination of winning. On the other hand, encouraging consumers to make external attributions for not getting the probabilistic reward could avoid negative moods or dissatisfaction, such as writing \"better luck next time\" on a losing sweepstake card.Our study has several limitations which suggest potential areas for future research. First, we use student samples in our laboratory experiments to maximize internal validity. However, students differ from average customers in their risk orientation and cognitive processing of risky promotions and hence future research could expand the principles to broader samples or field settings. Second, the stimuli used in our experiments were similar to other studies that have examined consumer decision making (Dhar et al., 1995; Simonson and Tversky, 1992; Chandon et al., 2000) and limited to three categories (i.e. 
fast moving consumer goods, home appliances and electronic goods). Future studies should be conducted in broader product categories to provide further support that the results were not due to characteristics of the particular product category itself but rather to the comparison between the values of the product and the expected promotional reward. Third, the present paper examines precise versus imprecise rewards at precisely stated probabilities. In real promotions, the probabilities of winning might be ambiguous also, which would make the options more risky. Future research should confront subjects with both forms of ambiguity and with varying levels of numerical precision. Fourth, in terms of theoretically exploration, a fruitful area for future research is to determine the underlying consumer judgment process that causes these results. For example, it would be interesting to examine the factors influencing consumers' ambiguity seeking and ambiguity avoidance, besides the probability and the promotion budget level.\n",
"role": "user"
},
{
"content": "Purpose: - This paper aims to try to investigate how consumers respond to precise versus imprecise promotional rewards at varied probabilities, including small ones and whether the effect is contingent on purchase value and promotion budget.\nDesign/methodology/approach: - Based on reviewing the literature on the ambiguity effect, the certainty effect and small probabilities, the hypotheses are tested by three experimental studies.\nFindings: - The experiments show that at a small winning probability, promotion with imprecise reward is more attractive than with precise reward. For high value purchases, a small-probability promotion with imprecise reward performs better than an equally costly 100 percent-probability promotion with precise reward. However, the difference is shortened for low value purchases. Furthermore, as the promotion budget increases, the ambiguity effect becomes stronger. As a result, the small-probability promotion with imprecise reward is preferred to the 100 percent-probability promotion with precise reward at both low and high budget levels.\nResearch limitations/implications: - The present study provides a starting-point for further research on how to design an effective marketing promotion program with precise or imprecise promotional rewards at varied probabilities.\nOriginality/value: - This research extends previous research on tensile versus precise prices to the region of very low or small probabilities, enriches the literature on \"overweighting of small probabilities,\" and also significantly enriches the existing theories on decision making under ambiguity.\n",
"role": "assistant"
}
] |
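The "overweighting of small probabilities" invoked throughout the record above has a standard formalisation that connects its numerical examples, such as the (5,000, 0.001; 5) choice, to the hypotheses. The Python sketch below uses the inverse-S probability weighting function of Tversky and Kahneman (1992); the parameter value g = 0.61 is their published median estimate for gains, not a figure from the paper, and the sketch is illustrative rather than part of the paper's own analysis.

```python
# Illustrative sketch: Tversky and Kahneman's (1992) probability weighting
# function w(p) = p^g / (p^g + (1-p)^g)^(1/g), which overweights small
# probabilities and underweights moderate-to-large ones. g = 0.61 is their
# median estimate for gains; it is not a parameter reported in the paper.

def w(p: float, g: float = 0.61) -> float:
    """Decision weight attached to a stated probability p."""
    return p**g / (p**g + (1 - p)**g) ** (1 / g)

for p in (0.001, 0.01, 0.25, 0.5, 0.8):
    tag = "over" if w(p) > p else "under"
    print(f"p = {p:<5} -> w(p) = {w(p):.4f} ({tag}weighted)")

# With these weights, a 1-in-1,000 chance at 5,000 can dominate a sure 5,
# mirroring the Kahneman and Tversky (1979) choice problem cited above:
print(f"weighted risky prospect: {w(0.001) * 5000:.1f} vs sure gain: 5")
```

Note that raising the stakes to (500,000, 0.001; 500) leaves w(0.001) unchanged, so under this simple weighting the attenuation at higher payoffs cited from Chen and Jia (2005) must come from elsewhere, for example from wealth or reference-point effects rather than from the weighting function itself.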
Scripts creation training
|
[
"Training",
"Experiential learning",
"Performance management"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\nIntroduction: The purpose of this work is to offer the ideas that script-making processes in training and development efforts: may have a very substantial influence on learning; represent processes that embrace several important motivation and personal development features; are knowable, and have steps that directly address motivational issues; and are at the core of several powerful educational and training designs. Some of those designs are briefly reported in this work.In much of the training and education aimed at performance that takes place in businesses, universities, and other organizations we find that there is much activity aimed at the creation, refinement, and practice of scripts or other scenistic approaches for achieving goals and for the continuous improvement of performance. The performance may be individually or group-based. The creation and use of scripts is practiced frequently in education aimed at professional work, for example, teacher-education, law, engineering, and management. These efforts are seldom labeled \"script creation\". Many trainers and educators take the process of script creation for granted, and few attend to the critical dynamics of the process. Many of them do not use the term, \"script\", as the term may seem to convey ambiguous or, perhaps unflattering, meanings.It is important for educators and trainers to gain a better understanding of the place and use of scripts in performance improvement and the personal development of individuals. Knowing more about the process of script creation can help to ground instruction and training design and can stimulate the facilitation of learning. Script creation, if facilitated well, can also serve as a model for self-learning. That is, in the models expressed later in this work, learners have a process available to them that they might use independently in other, similar or related, situations to improve performance.\nBackground: What is a script?\nEvidence of the effectiveness of script creation: Several training and educational approaches demonstrate the effectiveness of script creation processes. Frankly, we do not know of many unsuccessful attempts in the use of the methods as such efforts do not find their way into the literature. We report here, briefly, upon the research to which we have access. Keleman et al. (1993) used script creation and management to demonstrate how group support systems can be made more effective. The primary focus of their research was the implementation and use of group support systems in various arrangements for problem-solving, decision making and so on. They developed an approach that permits a facilitator of a group to enable script creation and script adjustments on-the-fly in real time situations with problem solving groups. The approach enables better use of time and more effective use of information by the group.Lyons (2003, 2004a) used script creation processes extensively in skill development and performance improvement training and education. Script creation was housed within a training design that applied skill-charting activities. 
Skill charting is a tool that uses the general script creation model explained in this work to help a group of employees or students to focus very intensely on skilled, behavioral performances that attend a particular key result area, such as customer satisfaction.Performance definition and focus are critical elements in the process and once skill attributes and behaviors are identified, in this example (Lyons, 2004a) - customer satisfaction pursuant to some types of transactions with employees, the learners go on to create performance standards for each one. The process is somewhat reflexive and self-reinforcing. In one study (Lyons, 2003), team leaders' performance of specific supervisory and leadership skills with team members was improved from using script creation processes in their training. In another study (Lyons, 2004b) a senior management team making use of script creation processes was able to positively influence a serious employee turnover problem through the skillful creation of behavioral profiles of ideal work associates.Finally, in a recent study (Lyons, 2004c) a training model was developed that made use of hypothetical problem situations (cases, incidents) with script creation activities superimposed on the case analysis work. The resulting approach was named Case-Based Modeling and was used to improve the performance of team members in certain performance areas such as skillfully managing meetings. The approach has broad applicability for training in general supervision, management, and for higher education in business and management. With adaptation, the approach could be used in many different situations and with many different occupational groups.\nConcluding remarks: These studies give evidence that script creation processes are effective tools for training and education. Because they heavily involve learners in the construction of information, continual evaluation of information, and in the application of information into their repertoire of possible responses to issues and problems, the training methods are somewhat labor-intensive. The general process offered in this paper responds very well to the motivational needs of the adult learner. Constructivist and experiential learning activities are especially appealing to adults in training and education work. The learning activities contained within the script creation process also lead to enhanced self-efficacy and self-confidence, as well. These are desirable consequences for any educational or training activity.Script creation processes are, fundamentally, off-the-shelf activities, however, careful facilitation is required for the tools to have substantial influence. Rehearsal on the part of the facilitator is strongly recommended.\n",
"role": "user"
},
{
"content": "Purpose: - To explain some of the dynamics of scripts creation as used in training, to offer a sequence of events to use in creating a performance script, and to offer some examples of how script creation is applied in training activities.\nDesign/methodology/approach: - The paper explains in detail and demonstrates the general processes of script creation for training uses. The design of the paper is to present, in sequence, a generalized script creation process, and evidence of the efficacy of script creation.\nFindings: - While there is not a lot of research available to practitioners about script creation uses in training, the research that does exist gives credible support for script creation applications. There is a body of research in cognition and cognitive processes that tangentially treats script behavior. However, that body of research is highly technical and esoteric and not of practical value to most practitioners.\nPractical implications: - The script creation processes and supporting information give practitioners a considered view of some useful training applications and generally add to the toolbox of trainers and facilitators. A sequence of events is offered that may be tailored for a specific training application. The script creation process, in general, offers a model for trainers to consider in applications such as exploration of new skills, skills improvement, continuous performance improvement and change initiatives, among others.\nOriginality/value: - As far as the author knows, there is no generally available model or template for script creation activities in a training context. This paper helps to address the matter. Of value to practitioners and managers is the explication of a general, malleable model for training activities that is grounded on sound learning and motivation theory.\n",
"role": "assistant"
}
] |
Quantifying demolition fitout waste from Australian office buildings
|
[
"Office buildings",
"Landfill",
"Fitout design",
"Stripout waste",
"Waste composition",
"Waste quantification"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\nIntroduction: Construction industry consumes more than half of mineral resources of the earth and, concurrently, generates the greatest proportion of waste that goes to landfill (Ajayi et al., 2016). Statistics reveal that landfilling is a ubiquitous issue with construction and demolition (C&D) waste across the globe, including developed countries as evidenced by 29 per cent in the USA, 35 per cent in China, 44 per cent in the UK and 44 per cent in Australia (Ajayi et al., 2016; DEFRA, 2013; Poon et al., 2001b; Wang et al., 2010). A recent study of waste generated in Australia showed an upward trend in generation of C&D waste reaching a record of 831 kg per capita in 2015, while the overall quantity of waste per capita has declined (Pickin and Randell, 2017).\nThe context of office fitout waste: Reasons for demolition are usually due to age and depreciation of building assets, but this is not obviously the case in many office fitout situations. Here, Slaughter (2001) makes the point that buildings require more changes than commonly assumed, and this occurs around dynamics within buildings pertaining to functional, capacity, environmental and people movement issues. This especially applies to office buildings which are not static and are particularly prone to change (Wilkinson and Remoy, 2011; Ryburg, 1996; Choukry, 1992; Douglas, 1996). Modern office buildings typically involve long span floor plates that allow fitout to be a flexible and non-structural part of the building, making it readily replaceable according to changing tenancy or floor space usage requirements. However, what often escapes recognition is that this replace-ability inherently involves cyclic demolition and strip-out, thus causing a sizable waste problem.\nConstruction and demolition waste literature: The waste hierarchy and its ranked levels or reduce, reuse, recycle, recover and lastly, landfill, are now well known across industry. Reasons for diverting waste away from landfill are plentiful and include the lack of landfill space in congested urban environments (Poon et al., 2004b; Wong and Yip, 2004); ground water leaching (Smith and Bishop, 2005; Xing and Hendriks, 2006); problems associated with hazardous substances (Krook et al., 2007), environmental problems (Kartam et al., 2004), disposal costs (Pun et al., 2006), carbon emissions (Pink, 2010) and resource depletion (Moriguchi, 1999; Hendriks et al., 2000; Muller, 2006; Jain et al., 2008).\nIndustry-based initiatives: Despite the relatively small amount of academic research, practical in-roads are still being made at the industry level. Wilmot et al. (2014) undertook an interview-based industry study about strip-out and salvage processes for office building fitout in the Sydney area. They found that four factors were consistently mentioned as influencing the level of resource recovery including time, cost, transport distances and contamination. They also found that strip-out and demolition choices are typically oriented towards the quickest and most familiar options; in many instances, new materials are relatively cheap compared to the effort and cost of reuse and recycling options (including salvage, handling and transport); for recovery to be cost effective, there must be a simple process to sell and recover waste. 
Complex assemblies and waste streams that are contaminated by multiple materials serve to add cost and prevent reuse and recycling (Wilmot et al., 2014).\nResearch method: In responding to the previous debate, the chosen method of objectively quantifying demolition waste from fitout revolved around measurement of the physical waste taken from individual strip-out projects. Quantification was undertaken on a sample of 23 projects, as detailed in Table I. These projects occurred where tenants were in the process of leaving their rented office space, thus requiring removal of fitout, as per their contractual obligations under the previously mentioned \"make good\" clause. The general scenario involved mid- to high-rise office buildings, with large and open column grids, thus making the fitout non-load bearing. One or more floor levels could be involved in stripout processes, as detailed in Table I.\nFindings - waste quantification from office strip-out projects: The total office strip-out weighs 9,167 tonnes with an overall composition of waste types depicted in Figure 1(a). As can be seen, mixed waste, hard fill and metal are the top three contributors to the fitout waste, respectively, accounting for 41.9, 27.9 and 17.7 per cent of the total waste generated. Other waste types collectively constitute 12.5 per cent of total consisting of plasterboard, workstations/furniture, timber elements, carpet, glass, ceiling tile, asbestos and green and e-waste. The latter three waste types have only been reported from one single project where they were separately treated in particular waste sites to ensure regulatory compliance.\nConclusions: The research identified a gap in the literature concerning a lack of quantitative data measuring the fitout demolition waste streams. The importance of this is based upon the aggregated waste from quick cycle obsolescence of office fitout, especially due to regularly changing tenancies in office buildings, and the amount of waste going to landfill.\n",
"role": "user"
},
{
"content": "Purpose: The short life cycle replacement of fitout in modern high-rise office buildings represents an under-researched waste problem. This paper aims to quantify the amount of demolition waste from office strip-out including attention to waste streams going to landfill, reuse and recycling.\nDesign/methodology/approach: Quantitative waste data (by weight) were measured from 23 office fitout projects situated in \"A\" grade office building stock from the Sydney CBD. Waste streams were measured separately for landfill, reuse and recycled materials. Descriptive and clustering statistics are presented and analysed.\nFindings: From a total of 9,167 tonnes office fitouts demolished, 5,042 tonnes are going to landfill. The main contributor to landfill stream is the mixed waste generated in a fast-track demolition process. This approach partly resulted from the office interiors lacking regularity and easy disassembly. Moreover, considerable variability is observed in the waste per area, the waste streams and the waste compositions. Also, it is noteworthy that the recycled waste stream considerably increases when there exist economically viable conversion facilities, as for metals, hard fills and plasterboards.\nResearch limitations/implications: The research is focused upon work practices that take place in Australia; therefore, generalisability is limited to situations that have similar characteristics. Future studies are needed to verify and extend the findings of this research.\nPractical implications: A key area arising from the research findings is the need to design fitout with recycling and reuse in mind to divert more from landfill. This must explore and incorporate onsite demolition processes to ensure the design is well suited to commercially dominant processes in the overall demolition process, as well as attention to developing economies of scale and viability in re-sale markets for reused items.\nOriginality/value: Little empirical or quantitative research exists in the area of office fitout waste. This research provides entry to this area via quantifiable data that enables comparison, benchmarking and diagnostic ability that can be used to underpin strategic solutions and measurement of improvements.\n",
"role": "assistant"
}
] |
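The headline figures in the record above can be cross-checked with simple arithmetic. The short Python sketch below uses only numbers stated in the record (the total tonnage, the three largest composition shares, and the landfill tonnage from the abstract); the roughly 45 per cent diversion rate is a derived figure, not one quoted in the text.

```python
# Cross-check of figures stated in the record; the diversion rate is derived.
total = 9167.0  # tonnes of strip-out waste across the 23 projects
composition = {  # per cent of total, as reported
    "mixed waste": 41.9,
    "hard fill": 27.9,
    "metal": 17.7,
    "other types": 12.5,
}
assert abs(sum(composition.values()) - 100.0) < 1e-9  # shares sum to 100%

for stream, share in composition.items():
    print(f"{stream:<12} {share:5.1f}% = {total * share / 100:7.1f} t")

landfill = 5042.0  # tonnes sent to landfill, per the abstract
print(f"diverted from landfill: {(total - landfill) / total:.1%}")  # ~45.0%
```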
Ethics of property, illegal settlements and the right to subsistence
|
[
"Kyrgyzstan",
"Human rights",
"Poverty",
"Social responsibility",
"Post‐socialism",
"Transition economies",
"Urban development",
"Ethics",
"Property"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\n__NO_TITLE__: This paper aims to explain how poor and propertyless groups struggle for basic necessities through land invasions, covert practices and illegal sabotage, examining how fundamental rights to subsistence and dignity are superior to private property claims (Witztum, 1997, 2005; Alexander and Penalver, 2009; Singer, 1987; Gray and Gray, 1998; Thompson, 1971). Witztum (1997) argues that Adam Smith in Lectures on Jurisprudence suggests that land is embedded with a moral right to ensure that every person in the community can employ their labour to share in the fruits of the land to subsist, despite attempts by rich elites and government officials to exclude poor individuals from enjoying their due share. We will argue that property can be viewed as propriety, involving reasonable expectations of land use and moral responsibilities towards others in pursuit of human capabilities and a flourishing society (Alexander, 2008; Gray, 1994). We will also offer a critique of the property rights approach to land that depicts property as a commodity with transferable and exchangeable rights that aim to maximise utility, reduce transaction costs, and re-structure society along free market principles (a la De Soto, 1994, 2000; Demsetz, 1967). The paper will contribute towards an interdisciplinary understanding of property that draws upon different literatures, including law, moral philosophy, economics and sociology.\nTheoretical ideas: We will briefly examine de Soto's arguments for titling property, suggesting that his model of property presents a flawed moral vision of property and society. Then we will present our alternative moral responsibilities approach, based upon human capabilities and moral sentiments.De Soto's formalisation of property rights\nResearch design and methods: This paper draws upon two research projects on illegal settlements in Bishkek and everyday politics in Osh, Kyrgyzstan's second city. The first author with Gulmira Churokova, Aigul Murat kyzy and Burul Usmanalieva conducted 27 semi-structured interviews with illegal settlers, lawyers and state officials. We focused on two prominent illegal settlements and a squatted building, of which two are privately owned and in legal dispute over property ownership and access. For our sample, we used call-calling and snowballing techniques, as well as targeting particular persons because of their knowledge and position. The small sample size reflected the methodological aim to gather rich and detailed qualitative data through in-depth interviews. But given the limited sample, our analysis should be seen as illustrative of moral sentiments and judgements of illegal settlers.The interviewees were encouraged to talk about their feelings and understanding of their situation, and their views on residence permit and state action. The interviews were conducted in Russian and Kyrgyz. All the interviews were taperecorded, and the interviewees were reassured about confidentiality. On average, each interview lasted for 45 minutes. The interviews were transcribed, and then translated into English. The data analysis involved reading each interview several times, and then coding specific concepts and themes (such as rights to land, foreigners, residence permit, household survival and state corruption), using NVivo 8 computer software. 
Codes were based upon recurring and emerging themes.The second author observed local politics in four villages in Southern Kyrgyzstan over eight months, examining how different elites, poor groups and state officials contest water and land rights and social provisions in the community. The research consisted of over 100 semi-structured interviews with different participants, including farmers, school directors and teachers, poor housewives, local officials and international aid workers, of which 87 were recorded and transcribed. The interviews were conducted in Kyrgyz and Russian, and then translated into English. The data analysis involved reading each interview several times, and then coding specific concepts and themes (such as elites, development, resistance, household survival and corruption), using NVivo 8. The interviews probed participants to talk about what problems they faced and the ways they attempted to solve them. The research is generalisable to the extent that it identifies key properties, structures and tendencies, but does not claim that patterns observed in the four villages are likely to be reproduced elsewhere in Kyrgyzstan.\nTwo case studies: In this section, we will examine two case studies from our existing research that highlight how poor and propertyless groups struggle for subsistence and access to valuable resources: illegal settlers' right to housing and land, and poor families' everyday resistance to ensure survival.Illegal settlers' right to human dignity\nConclusion: We have examined how rights to housing, land and valuable resources are shaped by subsistence claims, reasonableness, moral responsibilities and sentiments, which can be distorted by class, \"race\" and power. The moral responsibilities approach to property offers a better understanding of society and property, recognising how humans are vulnerable, needy and interdependent beings, who take care of and are cared by others. Land is an essential resource for human capabilities and flourishing, since without it we would perish. In determining whether poor and propertyless groups have a reasonable claim to land and resources, we must have an impartial and sympathetic understanding of their situation, assessing how lacking the property can undermine human subsistence and dignity.We rejected the normative underpinnings of the property rights approach to land that offers a market-based vision for society and property. We suggested to view property as moral and social propriety, in that property rights are relational, relative and contextual. We also criticised de Soto's policy recommendations to formalise illegal holdings, arguing that squatters' right to housing and land relates to human subsistence and well-being. To legalise squatters' assets as collateral for a loan misses the significant point that society has moral responsibilities towards vulnerable and poor groups. While the state recognises its constitutional duty to provide housing, education and health care, it tends to lean towards another duty to protect individual private rights, aiming to ensure freedom, autonomy and wealth creation. Only through impartial and critical judgements on rights and needs, can the state and the courts arrive what are reasonable expectations of its citizens, though given social and economic inequalities, such judgements are likely to be distorted.There are three implications of the moral responsibilities approach for development, political economy and social theory. 
First, institutional changes to property (such as land privatisation) must be understood in terms of their social vision for society, suggesting how human capabilities and flourishing can be achieved. While property and land reforms in developing and transition countries point to the moral values of human liberty, autonomy and wealth creation, they tend to ignore what moral responsibilities are owed to vulnerable and needy groups. Given the importance of land and resources for human capabilities, there must be moral limits to land privatisation and a recognition of morally non-excludable resources. Second, political economy requires an understanding of how institutions, rules and habits interact with virtues, character traits and emotions that generate a flourishing society. Neo-Aristotelians (such as Martha Nussbaum, Amartya Sen and Alasdair MacIntyre) and left-wing Smithian scholars (such as Samuel Fleischacker and Andrew Sayer) provide insightful discussions on the dynamics of political economy, suggesting alternative forms of capitalism (such as the human capabilities approach). In contrast, traditional models of political economy (such as Marxism, liberalism and institutionalism) offer an inadequate analysis of structure-agency, and of normative and emotional considerations. Finally, social theory needs to engage with moral economy, examining how moral sentiments, judgements and responsibilities shape and are shaped by economic structures, relationships and practices. Ethical considerations deeply move and motivate people to act for their own and others' well-being. Morality and emotions possess objective, contextual and cognitive properties, though individuals can be fallible about them. But there is a tendency to reduce ethics to issues of power, rules or performance. While power and interests are clearly important, we also have to give attention to the lived experience of everyday morality, since moral responsibilities help to constitute social relationships and we cannot escape questions of what obligations we owe to others.\n",
"role": "user"
},
{
"content": "Purpose: - The purpose of this paper is to examine how illegal settlers and poor families struggle for basic necessities through land invasions, covert practices and illegal sabotage, examining how fundamental rights to subsistence and dignity are superior to private property claims.\nDesign/methodology/approach: - The paper combines two qualitative research projects that examined property rights in Kyrgyzstan, conducting semi-structured interviews with poor groups, elites and state officials. One project was conducted between 2009-2010, examining two illegal settlements and a squatted building in the capital Bishkek, and the other project took place between 2007-2008 in four villages in Osh region.\nFindings: - It was found that illegal settlers and poor families deliberate upon the moral aspects of land and property, though sometimes their judgements are distorted by nationalist feelings and racialised identities. Poor and propertyless groups struggle for basic necessities, lacking access to social rights and facing class contempt and state coercion.\nResearch limitations/implications: - The authors criticise de Soto's ideas on legalising squatters' holdings, suggesting that his property rights approach to land offers a flawed moral vision for society and a mis-understanding of illegal settlements.\nPractical implications: - International donors need to re-think development strategies for increasing growth and reducing poverty, and for Kyrgyzstan to abandon the national residential registration system (propiska).\nOriginality/value: - The authors' moral responsibilities approach on property recognises the importance of land and valuable resources for human capabilities, the competing obligations of the state and the role of moral propriety and sentiments in shaping responsibilities towards vulnerable and poor groups.\n",
"role": "assistant"
}
] |
Self-censorship on large corporations in SNS: the effect of news exposure, knowledge, and perceived power
|
[
"Knowledge",
"Corporate power",
"SNS",
"Chilling effect",
"Media exposure",
"Self-censorship"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\nIntroduction: The phenomenon of self-censorship in speech, also known as the chilling effect, has long been studied in constitutional research, political science and communication studies. In constitutional studies, the chilling effect refers to self-censorship of an individual's speech for fear of reprisal from state authorities if/when their identities are revealed (Schauer, 1978; Sidhu, 2007). However, a chilling effect can occur not only from the fear of political authorities, but also from the fear of influential organizations in the economic and social sectors (Byeon and Chung, 2012; Sidhu, 2007; Solomon and Samp, 1998). Particularly in a capitalist society, large corporations with extensive economic power have expanded their influence to social and political areas (Fuchs, 2005; Herman, 1981). In South Korea, even the term, chaebol, referring to large business conglomerates such as Samsung, exists. Perceived power is one of the key elements that cause people to self-censor their expression or behavior (Detert and Edmondson, 2011; Foucault, 1982). If people are aware of the power of large corporations, they may avoid talking about certain ideas in the public sphere. An example of the chilling effect that is relevant to large corporations was reported in a survey in South Korea (Hwang, 2012). In the survey, 57 per cent of 1,366 job seekers reported that they deleted some of their postings on their social networking site (SNS) accounts just before they applied for a job with a large corporation. Moreover, 52 per cent of 397 HR managers reported that they monitored postings and comments of applicants' SNS during the evaluation processes. Although scholars have raised ethical concerns about employers using SNS in the hiring process (Brown and Vaughn, 2011; Clark and Roberts, 2010), no study has examined the mechanism behind the self-censorship related to large corporations.\nMethod: To test the proposed hypotheses, the present study conducted a survey of South Korean citizens who used SNS (e.g. Twitter, Facebook, Kakao Talk and LINE). Only those who used their real names and/or believed that other people could recognize their identities in the SNS space were included in the analysis[1].\nMeasures of variables: Self-censorship regarding large corporations\nResults: H1 predicts that the degree of self-censorship related to large corporations is greater than those of small and medium corporations. To test this hypothesis, a repeated ANOVA was conducted, in which self-censorship was used as a repeated measure variable and business type (large vs small and medium corporations) was used as an independent variable. Socio-demographic variables (i.e. age, gender, job, education level and SNS usage) were controlled in the analysis. The results of the analysis showed that self-censorship related to large corporations (M = 4.42, SD = 1.35) was greater than self-censorship related to small and medium corporations (M = 3.93, SD = 1.32). This difference was statistically significant: F(1, 444) = 10.60, p = 0.001 and partial e2 = 0.02[4]. Thus, H1 was supported.\nDiscussion: The present study explored the psychological mechanisms of citizens' self-censorship regarding large corporations. This study viewed self-censorship as a result of the chilling effect caused by a powerful agent. 
Specifically, individuals decide not to speak or write about something because they are concerned about socio-economic disadvantage or physical/psychological harm. These constricting behaviors are mainly caused by the surveillance of powerful people, agents, institutions and social systems (Foucault, 1980; Sidhu, 2007). Also, self-censorship is common in mediated contexts including SNS space (Das and Kramer, 2013). Chilling effects or self-censorship have been studied in terms of power distribution and political and ethical implications (Clark and Roberts, 2010; Detert and Edmondson, 2011; Sidhu, 2007). This study, however, focused on a different power agent, i.e. large corporations, because the influence of large corporations on society is becoming increasingly powerful and ubiquitous. Furthermore, corporations monitor and screen their employees' as well as applicants' SNS (Hwang, 2012; Sameen and Cornelius, 2015). Thus, we investigated the degree of self-censorship regarding large corporations in SNS and its relationship with the perceived power of large corporations. In addition, this study examined the overall relationships between media exposure about, knowledge about, perceived power of and self-censorship related to large corporations. Analyses of 455 Korean adults revealed that self-censorship related to large corporations was stronger than self-censorship related to smaller corporations. Also, corporation-related news exposure increased knowledge about large corporations, and an increase in the amount of knowledge about large corporations led to an increase in the perceived power of large corporations. Consequently, the perceived power of large corporations increased the degree of self-censorship related to corporations within the SNS space. Specific mediation effects were also addressed.\n",
"role": "user"
},
{
"content": "Purpose: This paper aims to investigate whether citizens censor their own expressions regarding large corporations in social networking sites (SNS) and how self-censorship is associated with the perceived power of, knowledge about and media exposure about large corporations.\nDesign/methodology/approach: A nationwide survey was conducted in South Korea (N = 455). The data were analyzed with structural equation modeling.\nFindings: As exposure to news about large corporations increased, the degree of self-censorship regarding large corporations increased. This effect of media exposure on self-censorship was mediated by the amount of knowledge about large corporations and the perceived power of large corporations.\nResearch limitations/implications: Although this study focused on the SNS context, the results of this study cannot provide the features of the self-censorship process that are distinct in SNS compared to other contexts. Although a causal model was provided based on theoretical reasoning, the nature of the data is correlational. Thus, one should be cautious when interpreting the results.\nPractical implications: The findings suggest that, while establishing privacy protection policies with regard to the SNS, policy makers need to consider how to prevent invasion of privacy and misuse of personal data by large corporations, interest groups and the unspecified public.\nOriginality/value: This study extends the literature related to self-censorship by identifying the effects of economic power and the psychological factors involved in self-censorship.\n",
"role": "assistant"
}
] |
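The H1 analysis summarized in the record above is a one-factor within-subject comparison: each respondent rates self-censorship twice, once for large and once for small and medium corporations. The following is a minimal sketch of that contrast, not the authors' code; the column names and data values are illustrative, and the socio-demographic covariates the paper controls for are omitted, since `AnovaRM` accepts no covariates (a mixed model would be needed for that).

```python
# Minimal sketch of the repeated-measures contrast behind H1 (illustrative data).
import pandas as pd
from statsmodels.stats.anova import AnovaRM

# Hypothetical long-format data: one row per respondent x business type,
# with self-censorship rated on a 1-7 scale.
df = pd.DataFrame({
    "respondent": [1, 1, 2, 2, 3, 3, 4, 4],
    "business_type": ["large", "small_medium"] * 4,
    "self_censorship": [4.8, 4.1, 4.5, 3.9, 3.9, 3.6, 5.0, 4.2],
})

# One within-subject factor; fitting prints F, the degrees of freedom and
# the p-value for the business_type effect.
result = AnovaRM(df, depvar="self_censorship",
                 subject="respondent", within=["business_type"]).fit()
print(result)
```

With a single two-level within-subject factor, this test is equivalent to a paired-samples t-test on the two ratings, which is why one F statistic with 1 numerator degree of freedom summarizes the whole contrast.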
The long and winding road: Problems in developing capabilities in an undergraduate commerce degree
|
[
"Assessment",
"Assurance of learning",
"Business",
"Undergraduate",
"Commerce",
"Outcomes assessment"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\nIntroduction: Hearing Paul McCartney sing The Long and Winding Road is reminiscent of the long journey of a student in an undergraduate commerce program with all its challenges and milestones. The development of a set of learning outcomes as part of a program is what characterises most programs in Australia. A graduate is \"promised\" a combination of discipline- and behavior-specific outcomes found in most university handbooks. The commerce degree is no different. Across Australia, the focus has been on what students will likely achieve at the completion of their study, if handbooks are to examine.\nMethods: The collected data from assurance of learning activities within the faculty of business for the bachelor of commerce comprised the data for the study. Relevant ethics approval has been granted for using such data. The data mainly composed of rubrics developed for the assurance of learning exercise and the \"reflections\" or planned actions that the lecturers have submitted in response to the results of the assurance of learning process.\nResults: Learning outcome 1: applying mathematical and statistical tools of analysis\nDiscussion: There is a good variety of students, subjects and year levels assessed in this study. This shows the extent to which commerce students engage with the range of learning experiences in the bachelor of commerce program. To summarise, there are a number of specific challenges observed in this study (Table VIII).\nConclusion: Many business-related programs include in them \"promises\" that deliver the best possible learning experience to student. The undergraduate commerce program is no different. To examine if these programs do deliver, it is important to conduct ongoing reviews. One of the many ways to assess such programs is to look into the range of skills that are promoted, develop instruments to assess them and reflect on the results. This paper has done that. It examined some skills needed for a commerce graduate to develop such as applying mathematical tools, demonstrating knowledge of theories, demonstrating knowledge of key concepts, and analysing organisational problems and generating solutions. Although the results are promising for current students of this specific faculty, it also shows that there are specific skills deficits. From the analysis, one can learn that this research has not only shown that there are problems in broad skills areas in business but, more importantly, it showed where those problems specifically lie. By targeting specific skills areas, curriculum development can have a much more direct approach to examining a number of learning outcomes. The long road could be less winding and more forgiving for some students who would benefit from such reviews. For example, specific lecturers, subject coordinators, and program managers can map their curriculum based on the kinds of skills emphasised in their subjects. They can then look at the teaching and learning activities, delivery methods and assessments where there could be possible gaps in developing the desired skills. At this level of detail, they can swiftly make changes to ensure that the subjects contribute to the overall achievement of program aims.\n",
"role": "user"
},
{
"content": "Purpose: The purpose of this paper is to provide an analysis of specific learning outcomes in an undergraduate commerce degree in a large research-intensive university in Australia.\nDesign/methodology/approach: It uses data collected from assurance of learning activities as part of Association to Advance Collegiate Schools of Business accreditation. A total of 267 assessments were marked using nine different rubrics in nine different subjects. It assessed six learning outcomes.\nFindings: Results indicate that a number of skills deficits exist among commerce students' application of mathematical tools, analysis of business issues or problems, demonstration and application of theories, models or concepts, describing alternative methods of analysis, and generating supported conclusions or solutions, and analysing the research of others. These findings tell us that there are a number of higher order thinking skills that students need to develop when they pursue a commerce degree. It also creates a challenge for universities to foster an environment where these skills are developed in the curriculum.\nPractical implications: The wider implications to higher education include a reconsideration of the purpose of the commerce degree, the importance of feedback from various stakeholders (e.g. alumni, employers) to inform the commerce curriculum, and the range of learning experiences that develop these skills. More importantly, this study has identified specific skills deficits across the broad generic skills embedded in the commerce degree. It can assist academic staff and program managers in planning for future curriculum improvements as they see fit in the context of their own commerce programs.\nOriginality/value: This is a novel contribution in that it provides specific assessment of skills deficits in business undergraduate education.\n",
"role": "assistant"
}
] |
Sex, benevolence and willingness to pay for screening
|
[
"Gender",
"Self-interest",
"Willingness to pay",
"Contingent valuation",
"Screening",
"Cancer",
"Sex",
"Benevolence"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\nIntroduction: Individuals are motivated by self-interest, but not exclusively. In addition to the prospect of purely personal gain, behaviour can be inspired by benevolence and a concern for the welfare of others. This is evidenced by a willingness on the part of many people to donate money to charities, to engage in voluntary work and to provide support for dependent family members. Both men and women participate in such activities, prompting the question as to whether one of the sexes might be more benevolent, or less self-interested, than the other.\nMethod: To minimise the risk of bias we determined that the scenario should comprise interventions which were, as far as was possible, similar in purpose and process, as well as being familiar to both sexes. We accordingly selected three forms of screening for cancer. In England, where this study was conducted, screening for bowel cancer is available to asymptomatic men and women at zero-price via the country's National Health Service (NHS). Both screening for breast cancer and screening for prostate cancer are available on similar terms, although for women only and for men only, respectively. An additional similarity is that the risk of each of these cancers is insubstantial amongst those aged 50 years or below, and screening is not usually recommended for that age-group.\nResults: Out of a total of 6,939 survey packages distributed to the general practices, 1,787 questionnaires in which subjects had completed the PTO task, the CV task or both were returned. The response rate (25.8 per cent) was similar to that achieved in a previous local study (Philips et al., 2005). In 207 (11.6 per cent) of the responses, a preference for screening type had been indicated yet no CVs had been recorded, i.e. all three values were missing. These questionnaires were excluded from the subsequent analyses. The study sample thus comprised only those responses in which at least one CV was offered (n=1,580). Of these, 55.5 per cent were from women.\nDiscussion: In the PTO task, more than half of respondents selected a service (breast or prostate screening) which would have excluded one particular sex. At the same time, over two-thirds offered identical CVs for all three services, thereby effectively electing not to discriminate. These people were evidently willing to make the same contribution to a service from which they could not benefit as to one from which they could. Previous CV studies have also reported a high proportion of identical multiple values in the presence of strong preferences over alternatives and have questioned the consistency of such findings (Olsen et al., 2005). However, in the present study as in the earlier ones, the results surely arose because the PTO and CV tasks were qualitatively different. On the one hand, the CV task allowed individuals to express different values for each service if they so wished but did not oblige them to do so. On the other hand, the PTO task made an explicit choice unavoidable.\n",
"role": "user"
},
{
"content": "Purpose: The purpose of this paper is to report the findings of a contingent valuation (CV) survey of health care services, designed to illuminate self-interest and benevolence on the part of one sex for the other.\nDesign/methodology/approach: In a constructed scenario, men and women recorded how much they would be willing to contribute to each of three different types of cancer screening, one of which would be available only to members of the opposite sex.\nFindings: Over two-thirds of individuals, amongst whom men were more heavily represented, chose an identical CV for all three services. Amongst those who nominated dissimilar values, a willingness to contribute to own-sex screening coupled with an unwillingness to contribute to opposite-sex screening was more common amongst women than amongst men. Both sexes valued own-sex screening more highly than opposite-sex screening yet, compared with men, women were prepared to offer proportionately less for the latter relative to the former. In an associated person trade-off task, women were considerably less likely than men to choose opposite-sex screening at the expense of a type from which they could benefit personally.\nOriginality/value: To date, very little research has been undertaken on differential responses to health valuation of care provision by sex. The results suggest a degree of asymmetry between the sexes, with respect to self-interest and benevolence.\n",
"role": "assistant"
}
] |
Pissing down your leg
|
[
"Economics",
"Inflation"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\nPissing down your leg: Did you ever think that making a speech on economicsis a lot like pissing down your leg? It seems hot to you,but it never does to anyone else (Lyndon Baines Johnson, 36th President of the United States). No president makes me laugh like you didBut I won't deceive youI never felt your scalding enthusiasmsMe a mere foot soldier from the dismal sciencesOne who never went \"all the way with LBJ\"Maybe half-way at best Oh I warmed to \"The Great Society\"Watched it immolate in DetroitGot fired-up by your \"War on Poverty\"While you napalmed the poorest in their paddy fieldsYou spent big taxed smallYour inflation outlived you by a decade But you certainly had hot stuff to talk aboutBig issues big-time economicsYet still you reckon you couldn't warm 'em up Think of me Down-Under spouting economicsSmall-time theoretical compulsoryYour conscripts numbered amongst my captivesAccursed winners in the birthday rafflesExpendable pawns in our grand game of ToadyStudying to defer graduations in khaki So LBJ I'm not horsing-about when I sayThose steamy excitations of yours evaded meBut so too did that clammy shameful aftermathThough there was the odd damp patchBefitting a non-marcher with his safe birth-dateBarely a tepid dribble in a brimming common pot\n",
"role": "user"
},
{
"content": "Purpose: - The purpose of this paper is a reflection on how the aridity of economic speeches, lectures and presentations usually conceals an underlying human drama, often a tragedy.\nDesign/methodology/approach: - A fictional poem presents the reflections.\nFindings: - Meditation on how economic policies cannot be divorced from their social context despite their often bland and \"distancing\" forms of expression.\nResearch limitations/implications: - The paper stimulates thought on the economic and social consequences of decisions, including those individuals with whom we are in direct contact.\nOriginality/value: - The poem is an individual comment on the \"realities\" behind economic decisions and how ignorance or considerations of personal safety constitute weak moral defences.\n",
"role": "assistant"
}
] |
Does DTC mean "direct to court"?
|
[
"Advertising",
"Medical prescriptions",
"Laws and legislation"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\n__NO_TITLE__: An executive summary for managers and executive readers can be found at the end of this issue.\nIntroduction: Direct-to-consumer (DTC) advertising is the subject of intense debate. Proponents argue that it can educate consumers about new treatments, increase treatment for under-diagnosed conditions, and help patients make better-informed health care decisions. Opponents contend that it could interfere with the physician-patient relationship, raise health care costs, and increase consumption of new, more costly products over older, cheaper, and safer alternatives. Opponents also argue that DTC ads oversimplify complex issues and may confuse consumers who lack specialized medical knowledge (Kaphingst and DeJong, 2004).This article investigates the changing legal landscape associated with the growth of DTC advertising of prescription drugs, and offers some suggestions for ways in which pharmaceutical companies can avoid liability arising from their DTC advertisements. We limit our discussion to prescription drugs, and do not address the sale of over-the-counter drugs. In the next sections, we discuss the success of DTC, and the views of proponents and opponents of DTC, and then explain the current state of the law and regulations governing a pharmaceutical company's advertising of prescription drugs. We conclude the paper by offering recommendations which we hope will provide direction toward a harmonious co-existence between DTC and the laws and regulations governing it, and toward a marketing paradigm which recognizes its customers as \"patients\" first, and \"consumers\" second.\nThe success of direct-to-consumer advertising: For decades, prescription drug makers promoted their products exclusively to health-care professionals, who were expected to interpret drug information for their patients. But about 15 years ago, partly because of the increase in the number of patients making their own health-care decisions, some manufacturers began to produce ads targeted to consumers. Since then, DTC advertising has become a popular promotional tool (Mehta and Purvis, 2003).The phenomenal growth of DTC advertising is a result of a change in Food and Drug Administration (FDA) regulations in 1997, which have made it easier for pharmaceutical companies to advertise prescription drugs, especially on television. These regulations include permitting advertisers more flexibility in providing information about the advertised drug. For example, the guidelines allowed advertisers to include both the name of the drug and the health condition it treats, along with a statement of major risks and side effects, and a reference to a source of further information such as a toll-free number or web site without requiring a complete statement of side effects and contraindications (Brumback, 1999). This made television broadcast ads possible (Mehta and Purvis, 2003).Consumer awareness of direct-to-consumer (DTC) advertising has climbed from 75 percent in 2002, to an all-time high 81 percent in 2003, according to data from healthcare and pharmaceutical marketing research firm Market Measures/Cozint. In all, this widespread awareness extends to print and broadcast DTC ads across all 25 disease states tracked by the firm's annual DTC Monitor. 
Based on input from more than 6,000 consumers, the study found that sufferers are not only recalling DTC ads for their conditions, but are also increasingly acting on the ads (Gatti, 2003). In particular, doctor contact rates - the percent of consumers who call or visit their physicians as a result of seeing a DTC ad - have risen for the second consecutive year to 23 percent. Moreover, among those patients contacting their doctors, 47 percent specifically request the brand they see advertised (up from 41 percent), while 74 percent of physicians actually comply with these requests (up from 71 percent). Furthermore, 58 percent of DTC-driven discussions lead physicians to write a prescription or provide a sample for the advertised brand - whether or not the patient ever directly asks for the product (Gatti, 2003). \"Clearly, DTC is a powerful tool, for driving both awareness and action\", said Sue Ramspacher, senior vice president of consumer services for Market Measures/Cozint. This is particularly true for magazine campaigns, which generate both higher doctor contact rates and higher prescribing rates than television ads. In spite of magazines' superior performance, television continues to be the medium of choice for DTC advertisers, due mostly to its broader reach. The gap is narrowing, however, as marketers experience the impressive results that print can deliver (Gatti, 2003).\nProponents and opponents of DTC: Proponents of DTC prescription drug advertising believe that the knowledge and information function of such advertising has enormous benefits. It is claimed that DTC advertising \"is an excellent way to meet the growing demand for medical information, empowering consumers by educating them about health conditions and possible treatments\" (Holmer, 1999). DTC advertising helps to educate consumers about the choices available in treatment, as well as provide information about various health conditions, which may not be widely known or easily recognized by patients. In some cases, patients may not even be aware that treatment exists. Armed with knowledge from DTC advertising, proponents say, consumers may discuss their treatment options with their doctors and be better qualified to help manage their own health care. Patient compliance may also be improved as a result of this interaction. With an aging population and the cultural trend toward increasingly easy access to medical information, DTC advertising is seen by many as appropriate, relevant, and beneficial (Mehta and Purvis, 2003). Opponents, on the other hand, believe that DTC advertising, even when truthful, may not always be completely read, so that consumers will not fully comprehend the side effects and risks associated with a drug's use. Further, some feel the increased patient involvement in the prescription development process may have unfavorable medical consequences since physicians may be persuaded, even pressured, by patients to prescribe the requested medication even when it is not the first choice of the physician. Many believe that, in fact, the \"principal effect of DTC marketing is to create consumer demand, changing the physician-patient relationship to a physician-consumer relationship\" (Hollon, 1999). It is claimed that pharmaceutical manufacturers are using enlightened \"health-care consumers\" to market their drugs through health-care providers (Hoffman, 1993; Mehta and Purvis, 2003). There is concern whether inappropriate prescribing is leading to people obtaining drugs they should not be getting. 
\"The problem with DTC advertisements for prescription drugs is that patients think that drugs are right for them without knowing the full story of their condition,\" says John Bertolini, Director of DDB Remedy. \"I believe it puts extra pressure on doctors and possibly inappropriate prescribing.\" Not the least of all, DTC advertising's impact on costs of medications is also a serious consideration (Sudhaman, 2004).However, these conflicting views are only part of the picture. What has changed, perhaps more than anything else, is the way in which the potential buyer of a prescription drug has come to be viewed as a \"consumer\" rather than as a \"patient.\" DTC advertising is responsible for that change, and because of that, the laws and regulations governing DTC advertising are changing.\nLaws and regulations governing the sale of prescription drugs and medical devices: Drugs are dangerous products\nRecommendations - how to prevent DTC from coming to mean \"direct to court\": It is very easy to see the dilemma faced by the marketing professional, who must find a way to promote a product while disclosing certain information which is contra-promotional, all within certain parameters and, in the case of broadcast media, also within strict time-frames. Yet, this is the task facing today's professional marketer of prescription drugs. How can this be done in a way which protects the patient, and does not jeopardize the future of DTC advertising?First, many of the criticisms of DTC advertising appear to be well-deserved, and would stand as the primary reasons to abolish DTC advertising, or severely curtail it. Those practices which have received universal criticism should be changed first. And this is not to say that all DTC ads are suspect. Many firms do a creditable job and a public service with DTC advertising, but some do not. If the industry does not regulate itself, the government will surely do it.Second, DTC advertising of prescription drugs has the potential to provide many benefits - truly educating the public, making the public aware of under-diagnosed and under-treated illnesses such as high blood pressure and high cholesterol, and helping to remove the stigma associated with certain disorders such as erectile dysfunction and depression (Rados, 2004). But that potential is not being realized. So much opportunity exists to adopt and accomplish these goals, and improve the perception of direct-to-consumer advertising, and the pharmaceutical manufacturers, but that opportunity is being wasted.If the perception of DTC advertising continues to worsen, you may be sure that new laws and regulations will follow. Even now, the Learned Intermediary Rule can no longer be relied on to shield drug manufacturers from liability for their products. Because the health care system has changed so dramatically, and because DTC advertising has essentially filled the void created by those changes, DTC advertising is at risk. Can it fulfill its promise?The authors offer these recommendations:1. Always remember that your customer is a \"patient\" before he is a \"consumer.\" If you approach advertising with this in mind, you will build credibility - your most important asset in any promotional endeavor. At the same time, you will be protecting your customer, and avoiding liability.2. Provide full disclosure of relevant information, particularly about the risks of your products, directly to the patient-consumer. 
Remember that any patient needs enough information about your product to make an \"informed consent.\" She must know the risks, as well as the rewards. If you do this, you will protect your patient-consumer, and avoid liability. 3. Make it a primary goal to educate your patient-consumer (Thomaselli, 2005), rather than to merely sell to him. It is a change of perspective which will have many benefits. 4. Make sure your drugs come to market only after full testing. Avoid the kind of disaster brought about by Vioxx and Celebrex, both of which leave their manufacturers facing expensive litigation. Adequate pre-testing to make sure the product was safe would have been less expensive in many ways, not the least of which is the appearance to the public of the rush to market of an unsafe drug in order to make huge profits. 5. Comply with FDA regulations and guidances in all respects; give no cause for further regulation. 6. Comply with all state laws in all respects; avoid losing what protection remains of the Learned Intermediary Rule. 7. Establish and safeguard your company's credibility: *Avoid the kinds of activities that have led to criticism by doctors, patients and consumer groups. For example, don't encourage overuse of prescription drugs. Liquor ads tell viewers to \"drink responsibly.\" Why not include in your ads some cautionary advice like \"As with all medication, take it only if you need it,\" or something similar. You will be doing a public service. *If you promote new or costlier treatment, advise the patient that other treatment may be available. And tell him that his doctor will be able to advise which will be best for him. *Make sure ads are fairly balanced between benefit and risk information, and do it in a way that calls attention to the risk information (Kaphingst and DeJong, 2004). 8. Bring the physician back into the decision-making process. Make sure your patient-consumer knows that individual health factors may affect a decision to use any medication, and that only her doctor can make this assessment. 9. Direct at least some product ads to the physician or health-care provider. You will be building credibility with both doctor and patient. 10. Be mindful of the patient who does not read well. When preparing \"adequate provision\" information, be sure that everyone knows where to get more information - even those who do not read well (Kaphingst and DeJong, 2004). 11. Compose your advertising at the recommended level for adults, an eighth grade reading level, and not to the college level. Use sub-titles, bullet points and key ideas to make your ads both more readable and more understandable (Kaphingst and DeJong, 2004) (see samples from FDA Draft Guidance for print ads, included at the end of this paper; samples may be downloaded from www.fda.gov). 12. Make sure that any admonition to \"tell your doctor\" includes the reason why - because this medicine may interact with others. You will be protecting your patient-consumer, and avoiding potential liability (Kaphingst and DeJong, 2004). 13. Your \"brief summary\" should be just that - brief, written in laymen's terms, relaying key ideas, and understandable at an eighth grade reading level (Kaphingst and DeJong, 2004). Be certain to provide a way for your patient to get access to full information. Do not default to copying the FDA labeling requirement. Your patient-consumers cannot understand it. This increases your risk of liability. 14. 
Promote medically necessary drugs at least as much as \"lifestyle\" drugs such as Viagra, Rogaine and Botox. Make sure your priorities are correct. Your credibility depends on it. 15. Avoid creating a disease just to be able to sell a drug. In the eyes of many observers, this is pharmaceutical marketing at its worst, and cannot be justified. 16. Do more than is required by the laws and regulations. Your most important asset is your firm's credibility. Establish it; maintain it. A company which acts ethically will avoid liability. An industry which acts ethically will avoid new laws and regulations which are almost certain to reduce its options for conducting business, and increase its costs. 17. Create more \"help-seeking\" ads. Provide the public service of raising awareness of undiagnosed and under-treated illnesses. With medical advice, include in your ads symptoms (Kaphingst and DeJong, 2004) the viewer should watch for, and if he finds them, tell him to see his physician to determine if what he thinks may be true actually is true. Educate your patient-consumer; establish your credibility. The goal is to do all of this, and still sell your product. That is undoubtedly a challenge. The professional marketer must combine marketing skills with compliance dictates in a way that is both appealing and understandable, and conveys all information necessary to any patient-consumer's informed decision. This will require a new kind of creative thinking. Compliance with creativity\n",
"role": "user"
},
{
"content": "Purpose: - The purpose of this paper is to investigate the changing legal landscape associated with the growth of advertising of prescription drugs directly to the consumer, and makes recommendations designed to assist advertisers in avoiding legal liability based on those advertisements.\nDesign/methodology/approach: - This study investigates the phenomenal growth of DTC advertising since 1997, when a profound change in the FDA regulations took effect. These changes permitted advertisers significantly more flexibility in providing information about the advertised drug directly to the consumer. Since then, however, DTC advertising has repeatedly come under attack. A review of the literature, changing law, and other factors, reveals the primary criticisms of DTC advertising, and its tendency to expose pharmaceutical advertisers to legal liability.\nFindings: - The paper recounts the development of the law concerning pharmaceutical advertising, and particularly, the application of the Learned Intermediary Rule. Previously, this Rule operated to shield pharmaceutical companies for liability by passing liability on to the physician who wrote the prescription for the drug. Now, that law is changing, with resulting liability for pharmaceutical advertisers.\nPractical implications: - The study recounts the primary criticisms of DTC advertising, and provides a number of steps that can be taken to help avoid legal liability for pharmaceutical companies that engage in DTC advertising.\nOriginality/value: - The study looks at DTC advertising from both a marketing and a legal perspective, and combines those disciplines to draw conclusions helpful to DTC advertisers.\n",
"role": "assistant"
}
] |
"In spite of everything?" Female entrepreneurship from a historical perspective: The Italian businesswoman Luisa Spagnoli (1877-1935)
|
[
"Gender",
"Italy",
"Entrepreneurship",
"20th century",
"Businesswoman",
"Luisa Spagnoli"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\n1. Introduction: In recent years, the literature on female entrepreneurship has developed in tandem with studies examining small and family-owned businesses. These works have determined the significance of female involvement as self-employed entrepreneurs, as shareholders and managers (Mandell, 2014). The analysis of the history of female entrepreneurs has in turn raised methodological issues, as it is not sufficient to simply report the presence of women in entrepreneurial situations or describe such women as somehow exceptional. The phenomenon must also be contextualized through a consideration of market transformation, to determine if and how gender can modify the meaning and the structure of business (Menzani, 2010; Mandell, 2014). New conceptions of women in business have thus been accompanied by the study of the wider context and circumstances in which female entrepreneurial experiences appeared (Doe, 2017; Vickery, 1993). As \"business has always been gendered\" (Gamber, 1998, p. 216), it follows that the number and shape of past female enterprises are the direct consequences of the institutional, social and cultural environment. In most historical epochs, these facts have typically prevented women's independence from male figures.\n2. Literature review: Historical perspective is just one aspect of the study of female entrepreneurship, and it was only at the beginning of the 1990s that Lehman (1992) identified the analysis of the roles and functions of women over time in businesses and professions with the term \"herstory\". Indeed, accountancy and business have long appeared in Western culture as male practices, constructed around a male conception of the economy (Cooper, 2001; Shearer and Arrington, 1993; Virtanen, 2009). The presumption was that upper-level positions are more complex and therefore only suitable for men (Spruill and Wotton, 1995; Welsh, 1992). Only large corporations could be the subject of business history (Gamber, 1998; Scranton, 1998). This reduced women's role in business and presence in the related literature. In the past 20 years, publications have started to appear that focus on women in accounting, finance and business history. Most papers on women in accounting history have focused primarily on the development of the accounting profession (Kirkham and Loft, 1993; Wootton and Spruill, 1994; Spruill and Wootton, 1995; Hammond, 2003; Ikin et al., 2012) and its role in enhancing job opportunities and the conquest of economic independence and political power (Komori, 2007). Other scholars have focused on the effect of institutional context in developing roles for women in accounting (Emery et al., 2002) or on the use of accounting in women's private lives (Walker, 1998; Virtanen, 2009).\n3. Research focus and methodology: This study is based on the historical analysis of collected data and sources. As Previts et al. (1990a) and Carnegie and Napier (1996) have noted, this reflects the tradition of the interpretive historical method, which offers the possibility to criticize the subject under study (Safari and Parker, 2017). The analysis was conducted using an archival method (Bucheli and Wadhwani, 2014; Decker, 2013; Toms and Wilson, 2017). The majority of information has been collected from archives, and primary and secondary sources were identified. 
With a multi-level approach to the study of entrepreneurship (Smothers et al., 2014), I have analysed the life of Luisa Spagnoli using a synchronic approach to its events (Servalli, 2007). This examination emphasizes the links among the personal, cultural, social and institutional domains (Frogget and Chamberlayne, 2004). The analysis of the context also adopts a historicist approach (Budd, 2008; Tosh and Lang, 2006), which suggests that every treatment accounts for the structures, culture, events and practices of the period under examination. The main primary-source data were collected in consultation with the Buitoni-Perugina company archive (Archivio Industrie Buitoni Perugina - IBP) located at the Nestle Italiana s.p.a. confectionery division in Perugia, near the Perugina Chocolate Museum.\n4. The position of women in Italy in the early 1900s: a brief analysis: The economic and social conditions of women at the beginning of the twentieth century in Italy were dramatically different from and unequal to those of men (Valsangiacomo and Lorenzetti, 2010). The gap between women's value and their social recognition was part of a national identity that historically excluded all Italian women from institutional and official valorization (Palazzi, 2002; Curli, 2005). This is primarily evident in the national legal structure. According to the Italian Family Code (1865), for example, mothers had no right to their children, and women in general were banned from holding public office. Article 486 of the Italian Penal Code punished adulterous women with a prison sentence of three months to two years, while a man could only be condemned in cases of concubinage (Minesso, 2015). Furthermore, all married women were under \"marital authorization\", which was only abrogated in 1919. This obligation precluded any married woman from running commercial and entrepreneurial activities without the permission of her husband, as:\n5. The creation of the sweetest \"kiss\": Spagnoli was born in Perugia, in modest circumstances, on 30 October 1877. As soon as she came of age, she married Annibale Spagnoli, from Assisi, and together they opened a small workshop in Perugia to produce dragees (Corvisieri, 2001). Their experience and success in this initiative and, most importantly, their capacity as entrepreneurs led to the establishment of the historic Perugina company, with three other business partners. Francesco Buitoni (the owner of the Buitoni pasta firm) wanted to create a structured company for the production of chocolates and sweets and decided to involve the Spagnolis in the project. While Buitoni did not require initial financial capital from them, he did need their knowledge and their workshop. On 30 November 1907, Leone Ascoli, Annibale Spagnoli and Francesco Buitoni established the Societa perugina per la fabbricazione dei confetti (Confectionary Production Company of Perugia) (Ceserani and Covino, 1997). Business began in January 1908 in a building in the centre of Perugia. Francesco Buitoni was the manager and administrative director, while Annibale Spagnoli, owing to his previous experience, took on the role of technical director, assisted by his wife.\n6. From rabbits to clothes: Luisa's initiative was not limited to the Perugina company. In the second decade of the twentieth century, she founded Luisa Spagnoli, which remains an Italian fashion company with global reach. She was a curious person. 
Her garden was populated with animals (including several breeds of chicken, pigs, nutrias, Karakul sheep, Toggenburg goats and exotic birds). The most interesting of these were the Angora rabbits. At the beginning of 1920, these animals were little known in Italy. Mario Spagnoli wrote that, at the time, there were few \"groups of Angora rabbits, and they were bred mostly for curiosity\" (Spagnoli, 1946, p. 8). Luisa decided to raise these rabbits to use their wool in the production of clothes, as she had seen done in Great Britain. She imported some specimen couples from there (as well as from France and Germany) to create an Italian Angora Rabbit, through \"patient crossbreeding\" (Spagnoli, 1946, p. 29). In her research, she found that wool collection techniques varied: in France, it was collected by periodically combing the rabbits, while in Britain, clipping was preferred. Luisa chose the combing process, and this was an important element in producing high-quality wool (Corvisieri, 2001). Moreover, Luisa discovered, after several attempts, that Angora wool got softer after dyeing, and capitalizing on the work of expert local spinners, she finally began to develop her new entrepreneurial initiative (Boneschi, 2002). The initial idea was to combine this knitwear with Perugina Easter eggs, but very soon production developed beyond this, owing in part to Mussolini's autarchy regime, which prevented the import of foreign goods and raw materials. At first, the Angora manufacture was artisanal, and the products were created with textile looms at workers' homes, in rooms at Perugina, or directly at the Spagnoli house. It is difficult to ascertain when the first products were sold, but the line was certainly launched by 1932, a year that saw the presentation of Angora wool garments in several boutiques in the city (Corvisieri, 2001).\n7. \"In spite of everything\": the entrepreneurship of Luisa Spagnoli: The previous sections not only outline the picture of \"Signora Luisa\" as a strong, resolute and unconventional woman, but also highlight her fundamental role in the development of Perugina and the creation of the Luisa Spagnoli clothing company. She continuously sought out new recipes and new mixes of ingredients for chocolates, as well as new products in general, managing the businesses despite her hidden role. In particular, her entrepreneurial contribution can be appreciated in terms of her innovation in products and organizations, as she also took an interest in improving the working conditions of her employees. Defined by \"her great and creative imagination\" (Buitoni, 1992, p. 52), Luisa did not merely own assets or property but made full use of them. She was free to make decisions and move freely around the company, managing and combining company resources to the best of her abilities (Ceserani and Covino, 1997). Thus, she was the basic motor of Perugina and the freewheel of other business initiatives. Nonetheless, she lived in a social and a cultural context in which businessmen were the accepted standard and normality and women were excluded from having an active public life outside the household. Even when they were employed, they were mostly marginal figures. The institutional context appears to have been completely against the development of Luisa's entrepreneurial role within Perugina. Some years later, the foundation of her eponymous angora wool factory was in spite of the Italian legal system and institutions. 
Through marital authorization and the ban on female workers doing certain jobs, the authorities discouraged women from reaching independence and freeing themselves from male figures such as fathers and husbands.\n8. Concluding remarks: The recent growth of studies on female entrepreneurship and businesswomen in a historical perspective has led to the necessity of introducing other dimensions of business (family and small and micro businesses) and an adequate approach to this phenomenon. This leads to the analysis of other factors in addition to the traditional individual-centric model generally used to explain entrepreneurial behaviours (Pathak et al., 2013). Furthermore, in spite of growing attention to the topic, there are still some lacunae in the literature. There is a predominance, in the international panorama, of publications on US and UK experiences. Further, one finds a general marginalization of gender studies in the wider context of accounting, management and business history (Schwarz, 2015; Sharpe, 1995; Walker, 2008). The aim of this work is to contribute to this topic through the examination of the history of Luisa Spagnoli, one of the most interesting female Italian entrepreneurs of the early twentieth century. Though a single-case analysis hinders generalizations, I believe that this study contributes to the scholarship on women's role in business history and respects the general purpose of management history. This \"can - indeed, must - tell stories that contribute, in whatever small way, to humanity's economic and social progress\" (Bowden, 2016, p. 124).\n",
"role": "user"
},
{
"content": "Purpose: This paper provides a historical case study, through the analysis of Luisa Spagnoli's entrepreneurial life. Luisa Spagnoli was one of the most famous Italian businesswomen of the twentieth century, founder of \"Perugina\" chocolate factory and creator of \"Luisa Spagnoli\" fashion firm. The study aims particularly to examine the role of Luisa in the development of her businesses within the wider context of Italy of the 1900s, and to verify if and how gender has influenced the meaning and the shape of her entrepreneurial initiatives over time.\nDesign/methodology/approach: This study offers a historical analysis of entrepreneurial life of Luisa Spagnoli, developed through an archival study in a synchronic view. An interpretive historical method is adopted to deepen and better understand the links among personal, cultural, social and institutional domains.\nFindings: This study contributes to the scholarship on businesswomen's role in history and underlines the role of personal perceptions of female entrepreneurs to overcome external barriers.\nResearch limitations/implications: The limitations of this study concern the nature of the analysis itself, which is a single-case study.\nOriginality/value: This analysis highlights the centrality of personal self-perceptions to face up to the difficulties of an unfavourable context, contributing to create the pre-conditions necessary to become an entrepreneur.\n",
"role": "assistant"
}
] |
Self-citations, a trend prevalent across subject disciplines at the global level: an overview
|
[
"Citations",
"Continents",
"Author self-citations",
"Countries",
"Journal self-citations",
"Self-citations"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\nIntroduction: Citing a published research work having relevance in one or the other way with an upcoming research work is an old age practice followed all across the world. Citing a work of another author in one's own work is seen generally for two reasons: first to present the idea of research already undertaken in the given field and, second, the practice of citing others works while reviewing the literature and representing the findings of earlier studies undertaken in the similar filed by other researchers actually helps one to form the basis of any new study, hence the research. Besides, by putting forward the earlier findings helps a researcher to validate his/her concept of study and to authenticate his/her findings. But over the period of time, citation metrics have assumed such a great significance and importance in the research world that even it will not be perhaps inappropriate to say that a researcher in the present day world is not known by the amount of research work he/she did in his/her lifetime but the number of citations he/she has received for the work done.\nReview of literature: The importance of self-citation is very important from the research point of view (Van Raan, 1998) as one cannot neglect self-citations, especially if it is to perform corrections and to avoid distortions. Six Anesthesiology journals were evaluated by analysing 3,144 research articles published in them from 1995 to1996, and the researchers observed that of the total references included in the Journal of Anesthesiology, 57 per cent were self-citations 57 per cent, the highest among other journals of the Anesthesiology (Fassoulaki et al., 2000). The authors made an overall observation that a high self-citing rate of journals makes a positive impact on its IF. There is a strong correlation between the practice of self-citations and the co-authorship observed while assessing the scientific output of Norway for the period 1981-1996 (Aksnes, 2003). The author further observed that there is a significant decrease in the self-citations with the growing time of windows. While examining the research output of oncology during the period 2003-2012 (Pandita and Singh, 2014) observed that of the total citations received by oncology publications during the period of study, 28.62 per cent were self-citations, whereas at the global level, North America emerged the leading continent to have 63.57 per cent self-citations, followed by Europe and Asia with self-citations percentage of 23.59 and 11.26 per cent, respectively. The authors further observed that the scenario of self-citations at continental level is almost on the similar lines, as on average 9.28 self-citations were observed in each oncology publication from North America, 2.84 from Europe and 2.39 from Asia.\nObjectives of the study: to understand and examine the trend of author self-citation prevalent across different subject disciplines across the world;\nMaterials and methods: The study has been undertaken on the secondary data, retrieved from the official website of the SCImago Journal and Country Ranking on 21 June 2014 accessible at www.scimagojr.com/index.php (SCImago, 2014). The study is based on the author self-citations practices, which have been computed against:\nResults: To perform simple expressions such as addition, subtraction, division, drawing percentage, MS Excel and other statistical and mathematical tools and techniques were used. 
Percentage at most of the places has been drawn up to two decimal places and has not been rounded off, as such may reflect a slight variation while computing data for 100 per cent figure (Table I).\nFactors leading to self-citation: Although there can be numerous reasons which play its part in self-citing a researcher's own work, mostly depending upon each individual reason for which an author is citing his/her own work and even justifying the same, some of the reasons identified during the process of review and analysis for which researcher and journals mostly self-cite include the following:\nConclusion: There is no general consensus among researchers across the world as how far the practice of self-citation is purposive or others, is it merely done for the sake of citation count, leading to increase in h-index and IF of journals or even for that matter of authors. As these metrics are mostly evaluated for appointment and promotional purposes in academics across the world (Gianoli and Molina-Montenegro, 2009), citation analysis of research publications across disciplines has also grown to a considerable level; higher the number of citations a publication receives, more the work is rated as qualitatively. By receiving the highest number of citations is also somewhere an indicator that the work is being acknowledged by a good number of peers and this ultimately helps in brandishing the researcher among his/her peers.Although the arguments put forth generally to support the practice of self-citing is about carrying forward the work by a researcher of his/her earlier works or continues to further his/her research findings in a chosen field. Besides, citing one's own work somewhere particularizes the research interest of a person. But it is being observed that authors can easily manipulate citations to improve their impact metrics like h-index. Self-citations are being seen as one of the easy ways to inflate IF or h-index. But surely to inflate the IF or h-index is not the only reason for self-citation, as many a times, it has been observed that without self-citing, authors find it difficult to carry forward their own research work.\n",
"role": "user"
},
{
"content": "Purpose: The present study aims to determine the prevailing trend of self-citations across 27 major subject disciplines at global level. The study also examines the aspects like percentage of self-citations in each individual subject discipline and the average number of self-citations per publication across different subject disciplines. The study also investigates self-citation percentage of research articles published from the 20 leading research countries of the world and across the continents.\nDesign/methodology/approach: The study is supported by empirical findings undertaken on secondary data retrieved from SCImago Journal and Country Ranking, which is entirely based on the SCOPUS data source (SCImago, 2014).\nFindings: In all, 76,634,557 citations were received by as many as 14,946,975 research articles published from 2008-2012 at an average of 5.12 citations per article. Of the total citations received, 26,404,609 (34.45 per cent) were self-citations, which means that of the total citations received by each research article, 1.76 are self-citations. Compared to subject disciplines falling under social and humanistic sciences, pure and applied sciences have shown a greater trend of self-citation. On average, 4.18 self-citations were observed in each research article published in multidisciplinary subject disciplines. Of the total citations received by research articles published in the discipline of Psychology, 43.69 per cent are self-citations, the highest among all the subject disciplines under study. Of the total self-citations received by all the subject disciplines under study, 18.43 per cent were received alone in medicine, highest among all, whereas Social and Humanistic sciences received less than 1.00 per cent self-citations, the lowest among all the subject disciplines.\nOriginality/value: This study is original and first of its kind covering each individual subject discipline having global scope.\n",
"role": "assistant"
}
] |
Scaffolding and supporting use of information for ambitious learning practices
|
[
"Problem-based learning",
"Information seeking",
"Resource integration",
"Sense-making",
"Scaffolding",
"Ambitious learning practices",
"Complex problem solving",
"Problems of practice"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\n__NO_TITLE__: True knowledge - understanding - develops through exploration, rumination, interpretation, judgment, and the application of information. Thoughtful work on projects and problems requires roaming through complex resources, seeking inspiration, messing around, making missteps and mistakes, and experiencing serendipitous discoveries. This kind of student learning and the in-depth interactions with teachers that it entails requires time. (Hawkins, 1997).\nAmbitious learning practices: an overview: Enabling contexts\nProblem-based learning: one form of ambitious learning: PBL is a student-centered inquiry approach to learning that begins with a dilemma to be addressed as students work in small collaborative groups with the teacher serving as a facilitator of learning. For example, students may work in small groups to determine the cause of fish death in a pond (Saleh et al., 2018; Sinha et al., 2015) or explaining the cause of a patient's illness (Hmelo-Silver and Barrows, 2008). The heart of PBL is a rich problem context that enables generative discussions but also includes various information resources and scaffolding (Ertmer and Glazewski, 2018). We consider this an ALP because PBL provides opportunities for student agency as they engage with meaningful inquiry tasks.\nWorking in a problem-based learning space: PBL is a student-centered inquiry approach to learning that begins with a dilemma or problem that drives the process as students work in small collaborative groups with the teacher serving as a facilitator of learning. As such, scaffolding has long been the terrain of PBL scholars in the learning and instructional design sciences, who have crafted definitions and processes of scaffolding in the research corpus over decades (Tabak and Kyza, 2018; Belland et al., 2008). As in most ALPs, PBL changes the role of teacher and students, as the teacher's role is to support and scaffold student learning rather than deliver content, while students become responsible for the learning. These changes in roles are demanding for both the teacher and the learner (Hmelo-Silver, 2000; Hmelo-Silver et al., 2018a). Here, we use PBL as an example to show how these features are enacted, though as we have noted, ALPs comprise more than PBL.\nCritical information literacy demands on the learner: In a typical cycle, PBL requires that learners engage both broadly and deeply with a wide range of content (Hmelo-Silver, 2015; Saye and Brush, 2017). At the beginning, learners are introduced to the problem and general parameters. They then work collaboratively to frame the problem, identify learning goals and needs, pursue learning issues, and construct an argument or solution (Belland et al., 2008). This requires learners to build a knowledge base through successful access, management, and use of information, all of which requires targeting learner agency through specific skills and strategies, discussed in more detail below.\nScaffolding and supporting critical information literacy in problem-based learning: It is important to recognize that learners need support for working in a PBL space, and the integration of problem-solving and information seeking strategies may contribute to learner success. Kuhlthau et al. 
(2015) emphasized the need for teaching and supporting strategies at a fine-grained level and highlighted the fact that teachers can draw attention to and reinforce specific search strategies until learners have internalized them. In one example, Maniotes and Kuhlthau (2014) illustrated the classroom potential by beginning with a local problem of an invasive plant species taking over walls and flower beds at the school. By showing images and asking students to discuss observations, the introduction served to ground the inquiry, frame it in a way that was relevant, and motivate next steps or questions. Once teams assumed more independence toward their project, they could move through a sequence that enabled both meaningful questioning and grounded research across a variety of topics and disciplines.\nCurating resources: Given the challenges and opportunities in searching and seeking for information, it can be helpful to start with curated resources. In the medical school PBL model, the significant prior knowledge and interest of the students can help bootstrap the information seeking process in more relevant and focused ways. For learners with lower prior knowledge, helping narrow the search space may be important to provide access to ALPs for a broader range of students (Derry et al., 2006). Hmelo-Silver et al. (2009) provided a learning sciences hypermedia to help students get started in a PBL course for pre-service teachers. They developed curated resources, both presenting their case materials as video cases and providing texts for engaging in self-directed learning. Multiple hypertext pages were linked to other relevant concepts and to the videocase materials. Jeong and Hmelo-Silver (2010) found that more effective groups engaged more fully with appropriate resources, whereas students in less effective groups tended to just briefly skim relevant materials. In K-12 settings, the Globaloria guided discovery program included a wiki with resources that learners can draw from (Reynolds and Caperton, 2011).\nDesigning for information seeking in the context of ambitious learning practices: A key strength of PBL and other ALPs is the focus on searching for, synthesizing, and applying new knowledge. Being able to work with information is an important twenty-first-century skill (Chu et al., 2017). However, searching for information can prove to be quite challenging (Kuhlthau et al., 2008). In this day and age, information literacy is digital literacy.\nSense-making from sources in ambitious learning practices: In any PBL investigation, students are responsible for simultaneously investigating goals and managing information. In this context, possibly the most challenging and difficult task for students is one of sense-making, which is internal and socially constructed (Ertmer and Glazewski, 2018; Hmelo-Silver and Barrows, 2006, 2008). Nonetheless, there are facilitation strategies for scaffolding internal sense-making that may help learners. These include making judgments about information relevance; sharing and comparing to refine understanding; and making connections toward integrated knowledge.\nMaking connections toward integrated knowledge in ambitious learning practices: One of the more well-documented and consistent outcomes of PBL specifically and ALPs more broadly is reflected in what generally falls in the category of skill application or critical reasoning (Strobel and van Barneveld, 2009; Walker et al., 2015). 
More specifically, in studies that compare PBL against lecture-based pedagogies, PBL is generally more favorable when outcome measures include complex measures of knowledge application, explanation, and problem solving. However, it is important to note that such findings are not spontaneous outcomes but, rather, are associated with intentional pedagogies that help learners to make connections across information sources and meaningfully integrate knowledge. Goldman and Scardamalia (2013) noted the challenges in fostering meaningful knowledge building, articulating some of the problems that arise specifically and especially because we are asking learners to make sense of large amounts of information from multiple sources by multiple authors, much of it conflicting or inconsistent.\nMoving forward with ambitious learning practices: In our own searches in preparation for this article, it seems that information searching, sourcing, and management represent critical efforts to ensure learner success in PBL specifically and ALPs more generally. Furthermore, we argue these efforts represent an area deserving of deeper investigation. Although we might know a lot about strategies for scaffolding generally, we know comparatively little about how to support learners in meaningful information integration processes. Additionally, different disciplines use different language to talk about learner actions, interactions, and needs. For the information sciences, information searching helped direct us to an appropriate literature. In the learning sciences, the same search terms for our review were not fruitful; the relevant work instead appeared under the research area of multiple source comprehension. For the instructional design community, this work would be found under information problem-solving, providing challenges in creating a synthesis toward a more comprehensive and unified understanding of ALPs.\n",
"role": "user"
},
{
"content": "Purpose: This paper aims to lay out the goals and challenges in using information for ambitious learning practices.\nDesign/methodology/approach: Through a review of the literature, the authors integrate across learning, information sciences and instructional design to identify challenges and possibilities for information searching and sense-making in ambitious learning practices (ALPs).\nFindings: Learners face a number of challenges in using information in ALPs such as a problem-based learning. These include searching and sourcing, selecting information and sense-making. Although ALPs can be effective, providing appropriate scaffolding, supports and resources is essential.\nOriginality/value: To make complex ALPs available to a wide range of learners requires considering the information literacy demands and how these can be supported. This requires deep understanding and integration across different research literature areas to move toward solutions.\n",
"role": "assistant"
}
] |
Scenistic performance tools for management and employee development
|
[
"Learning",
"Scenistic methods",
"Skill development"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\nIntroduction: This article explains and demonstrates how scenistic methods of training and development serve as important vehicles for management or leadership development. The term, scenistic, represents a class or group of methods, such as use of scenarios, incidents, case-studies, episodes, or stories. In this article, the words \"scenistic methods and/or approaches\" are labeled scenistic learning approaches and SLA is used for both shorthand and consistency purposes. As reported in recent research, SLA have been used almost exclusively for the improvement of employee performance in different organizational contexts (Bandura and Lyons, 2012; Lyons, 2011, 2010b). This article explains and demonstrates how SLA may enable managers to learn and improve individual competence across different areas including coaching, communication, team building, and problem solving, among others.\nBackground: The development, training and improvement of all employees for talent enhancement is important as evidenced by recent research (Garavan et al., 2012). \"Management training and development remains a critical element helping to drive competitive advantage in organizations\". Management training and development is a very large enterprise and the identification of proven methods of enhancing manager learning has value. For example, Patel (2010) estimated that $135 billion dollars were spent annually on training and development in the USA.\nWhat is a scenistic learning approach (SLA)?: Recent research demonstrates a variety of methods that can be labeled, scenistic (Labour, 2004; Leleu-Merviel et al., 2002; Lyons, 2008, 2007). By themselves, the use of incidents, cases, episodes, and related material in development and training are not new or different. What is clearly different from other approaches is the use of this type of material in the creation, application, and refinement of script-type behavior guides for learners to use on-the-job. Second, the application of the methods, aimed at performance improvement, is grounded on a partnership of managers and employees who learn together. In scenistic approaches, the learners, managers and employees, are the chief contributors to creation of the behavior or performance guides.\nTheory bases for SLA: Constructivism\nThe general components of scenistic methods: The process consists of several steps arranged in a sequence that provokes the interest and motivation of learners. Within the components that follow, the terms \"employees\" and \"learners\" are used interchangeably. The information provided in this segment is thought to be sufficient to permit practitioners to adopt and adapt the approach for immediate application.\nSummary and conclusions: As described in this article, \"scenistic learning approaches may be used in ways that go well beyond the standard training activities for knowledge and skill development\". Depending on purpose and goals, the methods are useful for performance improvement, personal development, manager development, and/or organizational development. The methods can help managers develop different types of generic competencies (Garavan et al., 2012). The applications are limited only by the desires, skills and creativity of the users.\n",
"role": "user"
},
{
"content": "Purpose: The purposes of this article are to familiarize the reader with scenistic learning tools, provide theory bases for the tools, present details for use of the tools in an organization, and offer details with reference to the efficacy of scenistic learning tools.\nDesign/methodology/approach: The approach taken is to provide sufficient information for a practitioner to apply scenistic tools with confidence. The paper provides the background and explicit details for using scenistic tools in practice.\nFindings: The findings demonstrate positive effects of training with scenistic tools. The effects are limited to specific areas.\nPractical implications: Several different instructional and/or training approaches may come under the rubric, scenistic, to include case analyses, use of critical incidents, story board and others. This article offers a complete model of how to create and apply a scenistic approach in an orchestrated manner to achieve positive change in an organization.\nOriginality/value: The article demonstrates how managers, training staff, and employees (trainees) work in a partnership to create and apply script-based practice guides and interventions to assist in solving actual organizational issues and problems.\n",
"role": "assistant"
}
] |
"New capitalism", colonisation and the neo-philanthropic turn in social policy: Applying Luhmann's systems theory to the Big Society project
|
[
"Big Society",
"Colonisation",
"Luhmann",
"Neo-philanthropy",
"Systems theory"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\nIntroduction: In an era of mobile capital, global economic crisis and domestic financial and fiscal austerity, many countries throughout Europe are currently re-assessing the relationship between their voluntary sector and the state in the field of social services as part of a broader re-alignment of state-civil society relationships (Ketola, 2012; Pestoff, 2012). Even in the strong social democratic welfare systems of Denmark (Villadsen, 2007, 2008, 2011), Norway (Tranvik and Selle, 2005), Finland (Cepel, 2012), and The Netherlands (Dekker, 2009), critical attention has turned to examine the consequences of this development both for the voluntary sector itself and the enduring commitments of the state in the field of social welfare.\nNew capitalism and the welfare state: Governments frequently invoke international markets, and the need to maintain global competiveness, as factors justifying welfare state retrenchment and a smaller public sector. The current variant on this argument is that governments can only maintain low state borrowing through the adoption of austerity measures which signal to the capital markets, and the international ratings agencies, that social and public expenditure will be reduced and maintained at a relatively low level in relation to measures such as GDP. This, it is assumed, is necessary to assuage a mobile international capital from migrating to developing regions and to allow the private sector to flourish and the public deficit to be reduced. However, there is evidence that Western governments are over playing this argument and exaggerating the extent to which mobile international capital presents a challenge to autonomous nation state management of the domestic economy. For example, Doogan (2009, pp. 71-72), drawing on United Nations Conference and Trade Development (UNCTAD) data on foreign direct investment (FDI), points out that transnational corporations (TNCs) which are headquartered in countries with large domestic economies such as the UK have little incentive to migrate. Indeed, the transnational indicators produced by UNCTAD of the top 100 TNCs, which measure the average ratios of foreign assets to total assets, foreign sales to total sales and foreign employment to total employment, indicate that the top TNCs have an average transnationality index of 50 per cent suggesting relatively low levels of foreign investment. In addition, the UK Government's expedient use of the wider global environment for essentially strategic short run domestic political purposes is evident in its use of the assessments made by the ratings agencies of whole country creditworthiness as justification for its public deficit reduction programme; the questionable effectiveness of the ratings agencies, historically and now, in predicting and assessing the \"real\" global economic and financial environment is overlooked because it is a convenient source of ideological justification for austerity (Mullard, 2012). At the heart of this argument is a view that governments must maintain a low corporate tax regime in a competitive global economic environment in order to discourage highly mobile financial capital engaging in what has been called \"regime shopping\" or \"regulatory arbitrage\" (Genschel, 2004; Hay and Rosamond, 2002). 
Is this view justified?\nThe Big Society project as the colonisation of the social lifeworld: The aim here is to theorise the emerging relationship between the political welfare system and those parts of the third sector most susceptible to marketisation and the social market economy by drawing on Niklas Luhmann's functional structural systems theory together with Habermas's observations on the colonisation of the lifeworld. Colonisation as a concept is deceptively simple, especially when used to describe relationships between policy actors in civil society and the state. By drawing on Luhmann and Habermas it is hoped that some of its complexity can be made visible.\nThe state, neo-philanthropic networks and the privatisation of welfare: At the centre of the Big Society strategy is an attempt to resolve this problem of the de-differentiation between the political welfare sub-system and the politicised environment of the social lifeworld by inserting the voluntary and charitable system into the interstices between state and civil society as a mechanism for neo-liberal governance. The structural coupling of the political system to emerging networks and consortia of wealthy philanthropic donors and voluntary organisations is, in part, an attempt to address this by establishing an \"at arms length\" relationship in which governments can \"manage\" the non-state sector through strategic, but limited, funding and administrative coordination. It is, however, politically deceitful. As Villadsen (2011) observes, the perennial issue for many of those working in the voluntary sector has been how to prevent the state from \"invading\" their territory: bureaucratising operations by the imposition of administrative controls, budgetary constraints and levels of professionalisation that make voluntary organisations appear as state outposts. This \"competition paradigm\" for describing state-voluntary organisation relations is being ideologically re-constructed as a \"cooperative paradigm\" inside the Big Society framework. However, this, as I have suggested above, contains elements of inauthentic political practice aimed primarily at colonising rather than empowering the third sector.\nConclusion: It has been argued that the Big Society initiative is less about civic renewal, although there are undoubtedly elements of it that are, and more about the privatisation (not simply the commercialisation) of social problems. Arguments that justify this development by appealing to the internationalisation of the economy are deceitful and disingenuous because the evidence that national governments are undermined by a mobile international capital is weak. Luhmann's functional structural systems theory is a useful theoretical perspective for making visible the changes that are forced on charitable organisations through their communicative interaction with the political system which ultimately leads to a process of colonisation taking place. The real implication of these developments for the character of the voluntary sector requires further critical examination not only because the state's enduring commitment to welfare may be in question but also for the growing significance of entrepreneurial philanthropy in shaping the character of volunteering and charitable activities and welfare relationships. As Villadsen (2011, p. 
1071) argues, using Foucault to analyse modern philanthropy in Denmark, we may be witnessing a Europe-wide process of greater \"civil society-based regulation\" which requires \"appropriate conceptual frameworks suitable for interrogating the complex interrelations\" between state and civil society emerging in Europe. My preference is to draw on Luhmann and Habermas to illuminate this issue. The Big Society project in the UK is fundamentally about the governance of philanthropy in the clear sense that it is creating an infrastructure of principles and institutional arrangements that allows the \"penetration of forms of economic and administrative rationality into areas of action that resist being converted over to the media of money and power\" (Habermas, 1987, p. 331). This process will need to be monitored and its impact critically evaluated in the coming years.\n",
"role": "user"
},
{
"content": "Purpose: Drawing on the functional structural systems theory of Niklas Luhmann and the theory of colonisation of the social lifeworld advanced by Jurgen Habermas, it is argued that the Big Society project in the UK is about the creation of an alternative non-state welfare infrastructure by the linking of wealthy donors with opportunities to engage in venture philanthropy in the third sector. The paper aims to discuss these issues.\nDesign/methodology/approach: The paper outlines key features of Luhmann's theory relating to autopoiesis and structural coupling to illustrate its conceptual strength in making the colonisation process visible.\nFindings: The paper illustrates the ways in which system imperatives underpinning the colonisation process absorb and subordinate public activism to narrow market principles forcing the third sector to communicate in the language of the market rather than caritas.\nSocial implications: The real implication of these developments for the character of the voluntary sector requires further critical examination not only because the state's enduring commitment to welfare may be in question but also for the growing significance of entrepreneurial philanthropy in shaping the character of volunteering and charitable activities and welfare relationships.\nOriginality/value: The paper applies Luhmann's theory relating to autopoiesis and structural coupling to make key features of government colonisation of the third sector visible.\n",
"role": "assistant"
}
] |
School as setting to create a healthy teaching and learning environment: Using the health promoting school model to foster school-health partnership
|
[
"Mentoring",
"Professional development",
"Professional capital",
"Knowledge mobilization",
"Professional learning community",
"Collaboratives"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\nPartnership: professional cultures in school to improve student well-being: Introduction: health promoting school (HPS) and implication for better education\nApproach to building school-health professional communities within the HPS: The HPS model is hypothesized to cultivate the professional cultures in the school setting to promote better student health and well-being. This section describes the linkage with education reform, literature review of HPS framework, and strategies for tackling the challenges.\nMethods: The initial stage was to design educational changes for HPS implementation by demonstrating how the key features of HPS matched with effective schools, and then build up educator capacity so they can be fully engaged during the process of establishing the paradigm shift in school health promotion. The Hong Kong Healthy School Award (HKHSA) Scheme is described as a method of intentional intervention and an effective HPS model, not only for improvement of health and the functional status of students, but also for managing an emerging health crisis in the school setting. Table I summarizes the review of evidence in supporting the development of HKHSA as an effective HPS model, striving for better education and health outcomes for students, school improvement, different types of formative and summative assessment research, as well as review studies to develop the model and evaluate the outcomes of HKHSA. The studies include papers by global experts discussing the strong linkage between HPS and school effectiveness in teaching and learning, also how to drive educational changes in a school setting, and papers discussing how these perspectives were addressed in a local context. Studies also include papers by authors and international experts in developing a system of monitoring and evaluating of HPS, and reports of Audit Commissions on its effectiveness. Studies have been conducted to compare the outcomes between schools achieving a good standard of HPS and schools not engaged in the HPS movement.\nFindings: The reported findings are not only limited to the improvement of student health and well-being, which has strong implications for teaching and learning, but have also captured changes in school culture, environment, policies, and organization to be more conducive to student health. The improvement of student health and well-being, and school culture and organization as result of HPS provides additional evidence as to the importance of elements of HPS in school effectiveness. The case study during Severe Acute Respiratory Syndrome (SARS) not only highlights the significant impact of HPS to improve hygienic and preventive measures, but also demonstrates how an unexpected health crisis affecting not only the school setting, but society as a whole, has led to stronger human and social capital among teachers to enhance school policies for decision-making during crisis situations. The case studies for obesity prevention also demonstrate the benefits of partnership with the health sector in strengthening the health curriculum, improving health literacy, and providing life skills training for students, as well as parental education. 
Both have also shown how the HPS framework can be adapted for different health crises facing schools, as well as the enhancement of stronger community links and partnerships.\nPractical implication: Hong Kong schools have been known for rote learning, teacher-centered, content-based, and examination oriented pedagogy (Biggs, 1995; Dimmock, 2000a), with a gradual shift toward more constructivist and student-centered teaching and learning in recent years (Dimmock, 2000a, b). The education reform around the turn of the century has called for more creativity, emotional intelligence, social skills, resilience, and problem-solving capabilities. For this reason, the concept of health has grown in importance along with the desire to embrace a well-rounded education as a healthy lifestyle. This is evidenced by its inclusion as one the seven key learning goals (Curriculum Development Council, 2001).\nOriginality/value: A recent special issue of HPS by one leading international health promotion journal has pointed out that very few schools are able to implement HPS in its entirety (Adamowitch et al., 2017; Joyce et al., 2017). It is then difficult to test the theoretical assertions as to the importance of monitoring data such as audits adopted in HKHSA and how these data monitoring motivates change (Joyce et al., 2017). The HPS movement in Hong Kong has established a sizeable network of schools adopting the HKHSA system over two decades and several studies have already observed positive changes (Lee et al., 2006, 2008, 2014, 2016). This paper discusses not only the need to add health as a central component of educational change, but also how Hong Kong has informed and shaped professional communities in schools to enhance students' well-being and achievement. The framework has instilled a sense of collective responsibility for the whole child, from professional development and team building, to leadership in schools. Continuing development of HPS in Hong Kong would add value to international literature, providing insights as to which types of data would influence adoption of HPS in which types of school and policy contexts as raised by Joyce et al. (2017).\n",
"role": "user"
},
{
"content": "Purpose: The purpose of this paper is to analyze how professional cultures in schools and school systems could improve the well-being of students, with a particular emphasis on teacher-health partnerships, which would not naturally occur without a specific intentional intervention. Implemented with a whole-school approach, the Health Promoting School (HPS) is one of the most effective intentional interventions to achieve improvements in both the health and educational outcomes of students through the engagement of key stakeholders in education and health to create a healthy physical/psycho-social environment. This paper emphasizes collaboration and the building of professional cultures in schools that share collective responsibility for the whole student.\nDesign/methodology/approach: Student outcomes in schools should include both academic and health and well-being outcomes that promote positive pathways throughout adulthood. This paper connects HPS research with policy analysis drawing on Hong Kong's unique context as being at the top of the PISA rankings and striving toward a positive health culture and well-being in its schools.\nFindings: Evidence has been gathered extensively about what schools actually do in health promotion using the HPS framework. The HPS framework has served to assist schools and authorities to concentrate on the gaps and affirm best practices. This paper also reports how teachers have created a professional and collegial community with health partners to address outbreaks of infectious diseases in schools and obesity in students.\nPractical implications: The concept of HPS can serve as an ecological model to promote the positive health and well-being of students, fostering their personal growth and development, and as an alternate model for school improvement.\nSocial implications: This paper has highlighted that structured school health programs such as HPS could have positive effects on educational outcomes, while also changing professional cultures and communities in schools with an emphasis on students' physical health, emotional health, social health, or spiritual health. The Assessment Program for Affective and Social Outcomes is used as a tool by schools in Hong Kong, reflecting the affective and social developments of the students in the school under review as a whole, and how they relate to the school. It resembles the core areas of action competencies, and school social environment; the two key areas of HPS.\nOriginality/value: Hong Kong is often analyzed from an educational rankings perspective. However, it offers broader lessons on educational change, as it has in recent years emphasized dual goals in student outcomes and professional communities - the importance of whole student health and well-being as a both a precursor and key component to the educational outcomes schools seek. Globally, very few schools are able to implement HPS in its entirety. Continuing development of HPS in Hong Kong would add value to international literature in terms of which types of data would influence adoption of HPS in which types of school and policy contexts.\n",
"role": "assistant"
}
] |
Managing foreign exchange risk among Ghanaian firms
|
[
"Foreign exchange",
"Risk management",
"Hedging",
"International trade",
"Ghana"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\nIntroduction: Every business activity is confronted with some risk or the other and coping with risk has always been an important managerial function. In recent years, however, risk management has received increasing attention in both corporate practice and literature. This is particularly true for the management of financial risks, i.e. the management of foreign exchange risk, interest rate risk and other financial market risks. This present study focuses on the management of foreign exchange risk.Foreign exchange risk management has become increasingly important since the abolishment of the fixed exchange rate system of Bretton Woods in 1971. This system was replaced by a floating rates system in which the price of currencies is determined by supply and demand of money. Given the frequent changes of supply and demand influenced by numerous external factors, this new system is responsible for currency fluctuations. These fluctuations expose companies to foreign exchange risk. Moreover, economies are getting more and more open with international trading constantly increasing and as a result companies become more exposed to foreign exchange rate fluctuations. Firms involved in international trade are subject to transaction risk arising from payables and receivables in foreign currencies. Also, multinational firms with operations in several countries will have translation risks from having assets and liabilities denominated in foreign currencies (Bradley and Moles, 2002).Empirical research shows that profits of multinational companies are affected by volatile floating foreign exchange rates. Nevertheless, small firms trading exclusively in their domestic markets also become increasingly exposed to foreign currency fluctuations. Actually, small firms depend on the volatility of the main currencies because many of them outsource their production to foreign countries. This means that they incur costs in foreign currency (wages, taxes, material, etc.) and they also need to manage this exposure. Other small firms are exposed indirectly given that their strategic position can be affected by volatile foreign exchange (Popov and Stutzmann, 2003). Foreign exchange risk management is thus crucial for companies frequently trading in the international market either as multinationals or simply involved in import and export trade.Previous studies on the subject, however, have tended to focus on exchange risk management practices of multinational corporations (Glaum and Roth, 1993; Batten et al., 1993). Little has been done with respect to firms involved in international trade (import and export). The present study reports the findings on the foreign exchange risk management practices among Ghanaian firms involved in international trade. The study focuses on how Ghanaian firms manage their foreign exchange risk and the problems involved in managing their foreign exchange risk. It also seeks to ascertain the extent to which these firms use exchange risk management techniques for hedging against foreign exchange exposure.The rest of the paper is organized as follows. In the next section, a brief overview of the main theoretical concepts of foreign exchange risk management is given. In the subsequent section, the research methodology is discussed. The empirical results are presented and discussed after that. 
Finally, the last section summarizes and concludes the discussion.\nTheoretical framework: Risk management is described as the performance of activities designed to minimize the negative impact (cost) of uncertainty (risk) regarding possible losses (Schmit and Roth, 1990). Redja (1998) also defines risk management as a systematic process for the identification and evaluation of pure loss exposure faced by an organization or an individual, and for the selection and implementation of the most appropriate techniques for treating such exposure. The process involves: identification, measurement, and management of the risk. The objectives of risk management include: to minimize foreign exchange losses, to reduce the volatility of cash flows, to protect earnings fluctuations, to increase profitability and to ensure survival of the firm (Fatemi, 2000).Foreign exchange risk is the risk that an entity will be required to pay more (or less) than expected as a result of fluctuations in the exchange rate between its currency and the foreign currency in which payment must be made. Foreign exchange risk is commonly defined as the additional variability experienced by a multinational corporation in its worldwide consolidated earnings that results from unexpected currency fluctuations. It is generally understood that this considerable earnings variability can be eliminated partially or fully at a cost, the cost of foreign exchange risk management (Jacques, 1981). Firms are exposed to foreign exchange risk if the results of their projects depend on future exchange rates and if exchange rate changes cannot be fully anticipated. Generally, companies are exposed to three types of foreign exchange risk: transaction (commitment) exposure, economic (operational, competitive or cash flow) exposure and translation (accounting) exposure. Transaction risk occurs where the value of existing obligations are worsened by movements in foreign exchange rates. Economic risk relates to adverse impact on equity/income for both domestic and foreign operations because of sharp, unexpected change in exchange rate. Translation risk is also related to assets or income derived from offshore enterprise (Glaum, 1990; Grant and Soenen, 1991; Madura, 2003).\nManaging foreign exchange risk: Foreign exchange risk can be managed in various ways. This section discusses techniques used for hedging against risk. Hedging can also be defined as \"all actions taken to change the exposed positions of a company in one currency or in multiple currencies\" (Prindl, 1976). According to Clark et al. (1993), hedging refers to the technique of making offsetting commitments in order to minimize the impact of unfavorable potential outcomes. The risk manager's choice of the different types of hedging techniques may, however, be influenced by costs, taxes, effects on accounting conventions and regulation. The different types of hedging techniques are discussed below.Payments netting\nResearch methodology: This study is based upon the premise that firms engaged in international trade are often confronted with foreign exchange risk. Foreign exchange risk management is, therefore, crucial for companies frequently trading in the international market. A sample of 100 firms was drawn from the Ghana Customs Excise and Preventive Service's database of importers. The sample was based on firms who are also registered with the Association of Ghana Industries (AGI) and imported at least twice a year for the last three years. 
The sample basically included firms involved in the importation of foreign inputs. The essence of including importers is due to the fact that importers are exposed more to foreign exchange risk in a net importing country like Ghana given the volatile nature of the local currency. However, most (60.3 percent) of the sampled firms were also into exports. Out of the 100 questionnaires sent out, 68 were received representing 68 percent response rate. The resulting response rate is high for a survey of this type. The responses were tested for non-response bias using a range of financial variables. This showed that the non-respondents had a somewhat lower proportion of imports and also cut across all the industries represented in this study. Since, there is no evidence of significant non-response bias, the findings of the study are not affected by the non-response bias. This means the results do not underestimate the true extent to which Ghanaian firms manage their foreign exchange risks.The survey instrument involved both closed-ended and open-ended questionnaires. The open-ended questionnaire sought to encourage respondents to share as much information as possible in an unconstrained manner. The closed-ended questionnaire, on the other hand, involved \"questions\" that could be answered by simply checking a box or circling the proper response from a set provided by the researcher (Fowler, 1993). While this method allows for easier analysis of the data due to standardized questions, its limitation is that it allows the researcher to determine only what the respondents are doing, not how or why they are doing it. Responses on the foreign exchange risk management (hedging) techniques were measured with a Likert-type rating scale. This helped in determining the extent to which the firms use hedging techniques.Since, most of the sampled importing firms were also into exports, the level of internationalization is measured by trade intensity (i.e. the ratio of export sales plus import sales to total sales). This definition is also a standard measure of trade intensity in the internationalization theory. Simple correlation analysis was used to establish the relationship between the trade intensity and the mean values of each foreign exchange risk management technique. Data obtained from respondents were entered into an SPSS database application for analysis. Descriptive statistics were basically used in the presentation and analysis of empirical results.\nPresentation and discussion of results: This section is devoted to the presentation and discussion of the results.Characteristics of sampled firms\nConclusions and recommendations: This study investigated the foreign exchange risk management practices among Ghanaian firms involved in international trade. The results of the study indicated that a little above 45 percent have no department or any one responsible for managing their foreign exchange risk. Foreign exchange risk is mainly managed by adjusting prices to reflect changes in import prices resulting from currency fluctuation and also by buying and saving foreign currency in advance. The main problems firms face are the frequent appreciation of foreign currencies against the local currency and the difficulty in retaining local customers because of the high prices of imported inputs which tend to affect the prices of final products sold locally. The study also revealed that a majority of the respondents never use hedging techniques in managing their foreign exchange risk. 
Overall, Ghanaian firms involved in international trade exhibit a low level use of the hedging instruments for managing foreign exchange risk. The reason may be attributed to the low level of education and sophistication among the firms' treasury personnel and also because of the under developed nature of the financial markets. Apart from a few banks (about five out of 20), most banks in Ghana do not have developed hedging products. This could be attributed to the volatile nature of the economy. The Ghanaian economy until the last four years was characterized by high inflation, high interest rates and lack of sufficient foreign currency. Even with the few banks (mainly multinationals), the derivative products they provide to their clients are mostly one offs. The elaborate documentations, risks and costs involved in using these hedging instruments could also account for the present situation.It is recommended that internationally oriented firms in Ghana must learn about these hedging techniques and begin to adopt them in managing their foreign exchange risk exposure. There is the need for Ghanaian firms to establish separate risk management department and engage the services of qualified treasurers and risk experts. It is also important for them to develop and implement a well functioning risk management system necessary for managing their entire risk exposure and ensure the survival of the firms. Ghanaian banks should also educate their clients on the importance of such hedging techniques in managing exchange rate exposure. Banks are encouraged to develop more hedging products to assist firms involved in international trade to manage their foreign exchange risk. The recent launch of the foreign exchange option product by Standard Chartered Bank should serve as a good example for other banks in Ghana.\n",
"role": "user"
},
{
"content": "Purpose: - This paper reports on the foreign exchange risk-management practices among Ghanaian firms involved in international trade. The study focuses on how Ghanaian firms manage their foreign exchange risk and the problems involved in managing exchange rate exposure. It also seeks to ascertain the extent to which these firms use foreign exchange risk management techniques.\nDesign/methodology/approach: - Descriptive statistics were used in the presentation and analysis of empirical results.\nFindings: - The results indicate that close to one-half of the firms do not have any well-functioning risk-management system. Foreign exchange risk is mainly managed by adjusting prices to reflect changes in import prices resulting from currency fluctuation, and also by buying and saving foreign currency in advance. The main problems the firms face are the frequent appreciation of foreign currencies against the local currency and the difficulty in retaining local customers because of the high prices of imported inputs, which tend to affect the prices of their final products sold locally. The results also show that Ghanaian firms involved in international trade exhibit a low level use of hedging techniques.\nOriginality/value: - The main value of this paper is the analysis of foreign exchange exposure management from the Ghanaian perspective. Relevant recommendations aimed at enhancing the foreign exchange risk-management practices among Ghanaian firms are made. The paper is useful not only to firms involved in international trade, but also to financial institutions interested in providing hedging products to these firms.\n",
"role": "assistant"
}
] |
Getting the most out of social learning: utilizing social media in the workplace
|
[
"Social learning",
"Social media",
"Learning technology",
"Workplace",
"Innovation"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\n__NO_TITLE__: The concept of social learning is not new. We have been applying it since the playground. What is new, however, is how pervasive it is about to become, thanks to the convergence of social media with advances in learning technologies, and their adoption by individuals and large enterprises alike.\nThe organizational tug of war: Chief learning officers want all corporate content to be readily available, rated, discussed and easily moved around the organization. Social media technologies can help do this. But the big investments in social media and knowledge assets typically reside in the domain of the CIO and the IT department (think SharePoint, Lotus et al.) rather than in the domain of the CLO. With content management systems, document management systems, and portals already in place, IT often pushes back against new forms of social media, saying \"use what we've got set up\".Likewise, learning management systems (LMS) traditionally manage content relating to regulation, standard operating procedures, and compliance, which is understandably carefully and centrally controlled. It has authority and doctrine behind it. It is typically been pre-built and pre-packaged for a formal classroom or on-line setting. If content is in the LMS, it is usually because the organization deems it necessary. It is important to establish what the IT department owns, and what the learning department owns.Whilst the LMS is focused on the organization's needs, social media tools are focused on the individual's needs and perspective. They provide a fast and easy means of informal knowledge sharing. Yet, by the very nature that makes them so useful, they are unregulated and uncontrolled, which can create some unintended and unwanted consequences. For example, a nurse could upload a video on a good approach to a particular procedure that he or she has learned informally, which is completely incongruent with the organisation's best practice guideline or legal procedures. Or, an engineer may circulate a video to his or her colleagues, showing how they found a quick fix for a certain piece of equipment, which may in fact invalidate the manufacturer's warranty or equipment safety levels. This level of enthusiastic individual endeavor is fantastic but it must be balanced with the rules of the organizations. This balance is less about the delivery mechanism, and more about understanding what is doctrine and what is not.\nMaking raw data permeable: What is needed is the best of both worlds. We need a way to rapidly capture raw data and turn it into doctrine in the LMS within the agreed organizational defined and managed guidelines, while at the same time encouraging expert content creation and knowledge sharing amongst the masses. Social media and LMS tools can offer a powerful way to take once raw data from all parts of the organization and make it permeable. Appropriately sanctioned content that can be created quickly rather than being prebuilt and formally pre-packaged for classroom training, has obvious advantages for both employees and employers.So how do you marry the two together? First, think of areas back in the pre-internet IT world about which you would have historically collaborated around the water cooler to gain advantage. 
A sales person sharing an experience of how they sold a product in a particular situation, for example, has very little danger or risk of regulatory non-compliance, but still has the high value of the collaboration.Identify these areas as an exercise to spot the areas where you can harness immediate and on-going value. By the way, the learning department should not perform this exercise, but rather the line of business managers. It is often helpful to think about the type of information in your organization that does not travel well in the business that you might send via email today. For example, an email explaining how a service manager successfully resolved a difficult customer service situation would be much more compelling as a short video, than an email trail. Then look at what social media tools already exist within the organization which can leverage this type of content. Decide how you will drive traffic and encourage participation with initiatives such as, competitions, rewards, \"forcing\" through changed process, well-planned programming schedules, middle management expectations, senior level management modeling, and social media events (live chats, team video jams etc). It is also important to ensure employees have the necessary skills and training in order to fully embrace whatever social media tools you are using.Second, do not adopt the expectation that simply putting technology in place will solve all your problems or create a social learning utopia. Social media tools need to be exploited and understood up front so you can properly pursue their effectiveness. They are the critical enabler to social learning, rather than the act itself. They should ideally be linked to your LMS. Social learning activities can then be linked to specific key performance indicators and performance reviews.Third, identify and ring fence those areas which have regulatory or legal implications to training and learning, and ensure there is no informal, maverick social learning taking place internally in these areas. It is a good idea to identify areas and topics that are never to be shared via social media, and have a plan in place in the event that these guidelines are breached. You need a loose policy in place in the same way you have a general social media policy in place for company blogs. The policy around external social media mirrors the same rules of engagement for internal social learning in terms of what is appropriate or not (see Box 1 for a summary checklist).Social learning can dramatically accelerate the way we absorb, share and apply knowledge in the workplace, providing we apply it sensibly, responsibly and specifically. Social media is far too viral and much too visible for us to learn this lesson the hard way.\n",
"role": "user"
},
{
"content": "Purpose: - This paper aims to reviews the latest management developments across the globe and pinpoint practical implications from cutting-edge research and case studies.\nDesign/methodology/approach: - This briefing is prepared by an independent writer who adds their own impartial comments and places the articles in context.\nFindings: - The concept of social learning is not new. We have been applying it since the playground. What is new, however, is how pervasive it is about to become, thanks to the convergence of social media with advances in learning technologies, and their adoption by individuals and large enterprises alike.\nPractical implications: - The paper provides strategic insights and practical thinking that have influenced some of the world's leading organizations.\nOriginality/value: - The briefing saves busy executives and researchers hours of reading time by selecting only the very best, most pertinent information and presenting it in a condensed and easy-to digest format.\n",
"role": "assistant"
}
] |
She's already busy: An exploratory study of women's workplace attitudes as predictors of organizational citizenship behavior
|
[
"Organizational citizenship",
"Job satisfaction",
"Work engagement"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\nIntroduction: In this fiscal climate, organizations are challenged with fostering an environment encouraging employees to go beyond job requirements to achieve organizational success (or to maximize profits). The female segment of the workforce has been cited as crucial to organizational performance. One study of worldwide private and public companies suggests that companies with higher numbers of women at senior levels are also companies with better organizational and financial performance (Desvaux et al., 2008). Another study comparing the financial performance of 215 Fortune 500 companies found a strong correlation between company profits and the number of female senior executives in their ranks (Adler, 2002). Finally, a ground-breaking study examining the link between gender diversity in top management and USA Fortune 500 corporations' financial performance found that companies with the highest representation of women in their top management teams experienced better financial performance than those with the lowest female representation (Catalyst, 2004). Important to note, while these studies indicate a link between gender diversity and firm performance, they do not imply causation. However, Catalyst's (2004, p. 3) suggestion that \"the business case for women in management contends that companies that achieve diversity and manage it well, attain better financial results than other companies\" has merit. Thus, creating an environment where managers understand the positive impact women can provide may help them add more women in the leadership pipeline. One way to experience this positive impact is through women's participation in various non-mandated and uncompensated behaviors or organizational citizenship behaviors (OCBs).\nLiterature review: Organizational citizenship behaviors\nMethod and sample: An online survey was developed and sent to the alumnae of a private northeastern university. A random sample of 100 alumnae from each of the institution's five professional schools was selected and n = 160 surveys were returned for a response rate of 32 per cent. The survey consisted of 49 questions which included several demographic variables such as age, education level, income and job tenure. We issued an e-mail message from the one of the university's female trustees with a link to the survey during a four-week period.\nDependent measures: Organizational citizenship\nIndependent measures: Job satisfaction\nResults: Organizational loyalty citizenship\nDiscussion and implications: Our research clearly suggests that married women who are highly engaged in work are the employees with the greatest potential for performing OCBs overall and ultimately supporting the citizenship initiatives of the organization. However, this study's findings suggest different patterns for the organizational loyalty and helping citizenship equations when examining many other antecedent variables. In the following sections, we discuss the significant similar antecedents across the organizational loyalty and helping citizenship models, delineate the specific differences in each model and finally consider research implications.\nConclusions: The results of this exploratory study have four major conclusions: first, the study confirms the importance of examining OCBs by gender. 
While organizations expect female employees to perform more OCBs than male employees (Kidder and Parks, 2001), our study's results indicate that women do not engage uniformly in all types of OCBs; rather, their level of engagement differs by OCB type.\n",
"role": "user"
},
{
"content": "Purpose: This study aims to examine women's organizational citizenship behaviors (OCBs) or the voluntary, discretionary behaviors employees perform that are not linked to their reward system but benefit organizations. Specifically, it investigates several attitudinal and organizational antecedents relative to two sub-dimensions of OCB: organizational loyalty and helping behaviors.\nDesign/methodology/approach: Alumnae (n = 160) responded to an e-mail survey regarding their self-reported OCBs, job satisfaction, work engagement and several demographic and organizational variables.\nFindings: In this fiscal climate, organizations are challenged with fostering an environment encouraging employees to go beyond job requirements. Findings here suggest that married women who are engaged in work have the highest propensity to do this by engaging in these non-compensated, non-mandated behaviors. However, importantly, differences were found between organizational loyalty citizenship and helping behaviors. An inverse relationship was also found between job tenure and helping behaviors: an interesting result.\nResearch limitations/implications: An important implication of the research is the dissection and examination of two sub-dimensions of OCB (i.e. organizational loyalty and helping), providing a better understanding of the dimensionality of the phenomenon and how they relate to job satisfaction and work engagement for a significant segment of the American workforce: women.\nOriginality/value: This study examines the dimensionality of OCB (as called for by previous research) and establishes that not all OCBs can be treated equally, as antecedents vary in their predictability of OCB engagement. Further, this research investigates the relationship between individual job satisfaction components (pay, recognition and supervision) and OCBs to help clarify conflicting findings between OCB and this key workplace attitude.\n",
"role": "assistant"
}
] |
Employing proactive interaction for service failure prevention to improve customer service experiences
|
[
"Proactivity",
"Service recovery",
"Service failure",
"Proactive service",
"Reactive service",
"Service failure prevention"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\nIntroduction: The ability to create superior customer service experiences is well-recognized as a major differentiator in competitive markets (Homburg et al., 2015). Yet, maintaining high levels of customer service is an elusive goal largely due to the inevitability of service failure. Maxham (2001, p. 11) defines service failure as \"any service-related mishaps or problems (real and/or perceived) that occur during a consumer's experience with the firm.\" Although services marketing research continues to be largely focused on firms' reactive interactions for recovering from service failure (Van Vaerenbergh et al., 2014), studies indicate that service organizations' efforts to improve service experiences by proactively preparing customers for the adverse effects of service failure are favorably received (e.g. Mikolon et al., 2015; Parasuraman et al., 1991). Thus, researchers increasingly contend that service organizations can provide customers with superior service by placing more emphasis on proactive interaction (Challagalla et al., 2009; DeWitt and Brady, 2003; Mikolon et al., 2015; Potter-Brotman, 1994).\nBackground: Proactivity\nConceptual model and hypotheses development: As presented in Figure 1, our conceptual framework proposes that proactive interaction for service failure prevention and reactive interaction to correct service failure differentially influence customer emotion and patronage behavior. We hypothesize that customers who experience proactive interaction for service failure prevention display greater positive affect and engage in more beneficial patronage behavior than customers that receive a reactive interaction. In addition, we propose that these relationships are further influenced by customer- and firm-specific factors.\nModerating hypotheses: The requisite firm resources for service failure prevention are distinctly different to resources needed for service failure recovery (Potter-Brotman, 1994). As shown in Table I, proactive interaction inherently reaches and influences more customers than reactive interaction. Thus, Challagalla et al. (2009, p. 84) observe that, \"proactively reaching out to a larger number of customers and serving them takes personnel and monetary investments.\" However, the deliberate and significant devotion of managerial and employee resources required to serve customers effectively can reduce productivity and profitability (Anderson et al., 1997; Rust and Huang, 2012). Moreover, research indicates that the value of relationship marketing strategies varies depending on contingent factors (e.g. De Wulf et al., 2001; Palmatier et al., 2009).\nMethod: Design and participants\nResults: For Study 1, we analyzed positive affect, customer satisfaction and repurchase intention with an analysis of covariance. Employee effort and competence were covariates, and interaction type was a between-subject factor. H1 and H2a-b test the main effect of interaction type on customer affect (H1), satisfaction (H2a) and repurchase intention (H2b). The mean for positive affect was higher in the proactive (5.91) than in the reactive condition (4.95, F=11.02, p=0.00). Therefore, the analysis supports H1. 
Similarly, the effects of interaction type on satisfaction (Mp=6.32 vs Mr=5.33, F=16.70, p=0.00) and repurchase intention (Mp=6.27 vs Mr=5.18, F=14.09, p=0.00) are both significant, also providing support for H2a-b.\nGeneral discussion: Our research responds to previous contentions that service provider firms can offer customers superior service experiences by placing more emphasis on proactive interaction. Our findings reveal that customers prefer service providers who take the initiative to reach them before they have to initiate contact themselves. More specifically, our study contributes to the development of service provision theory and practice by expanding on previous studies which report that proactive efforts to prepare customers for the adverse effects of service failure are favorably received (Challagalla et al., 2009; DeWitt and Brady, 2003; Mikolon et al., 2015; Potter-Brotman, 1994). Our results also shed light on moderating factors that may further inform the exploitation of resource-intensive proactive interaction for service failure prevention.\n",
"role": "user"
},
{
"content": "Purpose: Services marketing research continues to be largely focused on firms' reactive interactions for recovering from service failure rather than on proactive customer interactions that may prevent service failure from occurring in the first place. Building on previous studies that assess the efficacy of implementing proactive interaction in service provision contexts, the purpose of this paper is to compare the influences of proactive interaction to prevent service failure and reactive interaction to correct service failure on customer emotion and patronage behavior. Since proactive interaction for service failure prevention is a relatively underexplored and resource-intensive approach, the authors also assess the moderating influences of customer and firm-related characteristics.\nDesign/methodology/approach: The study hypotheses are tested with survey data from two scenario-based experiments conducted in a retail setting.\nFindings: The findings reveal that customers prefer service providers that take the initiative to get to them before they have to initiate contact for themselves. The findings also identify the moderating influences of relationship quality, situational involvement, and contact person status and motive.\nOriginality/value: The research contributes to the development of service provision theory and practice by expanding on previous studies which report that proactive efforts to prepare customers for the adverse effects of service failure are favorably received. The results also shed light on moderating factors that may further inform the exploitation of resource-intensive proactive interaction for service failure prevention. An agenda is proposed to stimulate future research on proactive customer interaction to prevent service failure in service provision contexts.\n",
"role": "assistant"
}
] |
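The Study 1 analysis in the entry above is a one-way analysis of covariance: interaction type (proactive vs reactive) as a between-subjects factor, with employee effort and competence as covariates. Below is a minimal sketch of that kind of test, assuming Python with statsmodels, simulated data, and hypothetical column names (`affect`, `condition`, `effort`, `competence`); it is not the authors' analysis code.

```python
# Hypothetical sketch of the ANCOVA reported in Study 1: positive affect
# compared across proactive vs reactive conditions, with employee effort
# and competence entered as covariates. Data and column names are assumed.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(42)
n = 200  # assumed sample size; not reported in the summary above

condition = rng.choice(["proactive", "reactive"], size=n)
effort = rng.normal(5.0, 1.0, n)       # covariate 1
competence = rng.normal(5.0, 1.0, n)   # covariate 2
# Simulate a condition effect resembling the reported means (5.91 vs 4.95).
affect = (np.where(condition == "proactive", 5.91, 4.95)
          + 0.2 * (effort - 5.0) + rng.normal(0.0, 1.0, n))
df = pd.DataFrame({"affect": affect, "condition": condition,
                   "effort": effort, "competence": competence})

# ANCOVA: the between-subjects factor is tested while adjusting for
# the covariates entered in the same linear model.
model = smf.ols("affect ~ C(condition) + effort + competence", data=df).fit()
print(anova_lm(model, typ=2))                     # F-test for condition
print(df.groupby("condition")["affect"].mean())   # observed cell means
```

The same call pattern would apply to the satisfaction and repurchase-intention outcomes reported in the entry, swapping in the relevant dependent variable.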
Knowledge protection in knowledge-intensive business services
|
[
"Knowledge management",
"Data management",
"Data security",
"Knowledge‐intensive business services",
"Knowledge protection",
"Survey",
"Italy"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\n1. Introduction: This paper examines the issue of knowledge protection (KP) in knowledge-intensive business services (KIBS). KIBS companies have been regarded as a distinctive trait of the knowledge economy and as such they have been the subject of an increasing number of studies (Doloreux et al., 2010). In particular, their crucial role in promoting innovation and development of local economies has been acknowledged since the mid-1990s (Pro Inno Europe, 2009; Tuominen and Toivonen, 2011; Corrocher and Cusumano, 2012).According to the extant literature, the functioning and the innovative capability of KIBS are strictly associated with the cognitive assets they possess (Miles et al., 1995; Strambach, 2008; Muller and Doloreux, 2009). First, their main production factor and outcome consists of knowledge, directly delivered to customers in the form of consulting, or embedded in artefacts and services. Second, their activity is mostly based on the exploitation of the specializations, skills and knowledge of their employees. Third, the provision of knowledge-intensive services requires an in-depth interaction between supplier and user, both involved in cognitive exchanges and learning processes (Bettencourt et al., 2002). Fourth, the delivery of such services involves a process of problem solving in which KIBS companies adapt their knowledge to the specific requirements of individual clients. Fifth, they often act as interfaces between global sources of knowledge and end users (Smedlund, 2006). Finally, their innovative capability is directly connected to the acquisition, processing, capitalization and delivery of new knowledge (Amara et al., 2009).Accordingly, KIBS companies often have to face two contrasting goals that recall the knowledge sharing - KP dilemma typically related to innovation activities (Olander et al., 2009): on the one hand, they need to be open to external relationships and to manage effective knowledge exchanges with suppliers and customers; on the other hand, they have to protect the development and capitalization of internal cognitive assets that constitute their core competitive resources. In particular, the latter is a crucial question since, in KIBS, innovations are mainly intangible and frequently co-produced with customers. This fact has two important consequences. First, imitating and copying innovative services may be very easy and rapid (Paallysaho and Kuusisto, 2008a). Second, the intellectual content of services may be difficult to protect by means of formal intellectual property (IP) methods (Amara et al., 2008). Moreover, the small size that generally denotes such companies exacerbates the knowledge sharing - KP dilemma and limits the possibilities of finding proper countermeasures (Olander et al., 2009).The literature that deals with the topic of KP generally focuses on the use of formal intellectual protection rights arrangements and especially on patenting activity carried on by large manufacturing firms (Amara et al., 2008; Bader, 2008; Xu and Tan, 2010). On the contrary, little attention has been devoted to services companies (Candelin-Palmqvist et al., 2012) and to KIBS (Paallysaho and Kuusisto, 2008b). Therefore, there is the need to shed light onto the ways KIBS firms protect their knowledge, and to understand the role that KP activities play in this sector. 
In particular, since knowledge is the core resource of those companies, and as such should be the focus of management activities, there is a need to investigate how KIBS firms' adoption of KP mechanisms relates to their knowledge management approaches.\nIn light of this, the paper addresses the following research questions:\nRQ1. How do KIBS firms protect their knowledge?\nRQ2. Is there a relationship between structural characteristics (in particular, size and sector) and the adoption of KP mechanisms?\nRQ3. Is there a relationship between the KP mechanisms used by KIBS companies and their knowledge management strategies?\nIn order to answer these questions, the paper illustrates and discusses a survey of 471 KIBS companies located in the northeast of Italy (more precisely, the Veneto Region) and operating in three different industries, namely: ICT services (ICT), Design and Communication (D&C), and Professional Services (PFS). This geographical context has some features that make its analysis interesting. First, the average size of Italian companies is among the smallest in the European Union: an average of 3.9 employees is notably low compared with the average of 6.4 employees for European Union companies (Eurostat, 2009). In addition, Italian companies are mainly specialized in mature manufacturing sectors. This means that the need for business services and their capability to transfer innovations can be, at least in principle, very significant for the Italian economy (Antonietti and Cainelli, 2008). Furthermore, the Veneto Region - which is the focus of this study - well represents the above-mentioned situation (Corrocher and Cusumano, 2012; Schricke et al., 2012). On the other hand, as will be clarified later in the paper, the KIBS sector in Veneto has important differences compared to other places in Europe, both regarding the average size of companies and their propensity to use IP mechanisms. Therefore, an analysis of the Veneto region complements the previous studies that focus on KIBS in other geographical areas, and provides the opportunity to compare and discuss the different results that are obtained. The paper is organized as follows. The following section gives a brief review of KP methods used by KIBS, and summarizes the main results of previous studies. The third section illustrates the research rationale, and the next one the method adopted for the empirical analysis. In the fifth section the main findings of the empirical investigation are pointed out and discussed. The last section proposes some conclusive remarks and highlights the limitations of the study.\n2. Mechanisms of KP in KIBS: It is first useful to explain how some terms are used in this paper. Intellectual capital is the combination of the intangible assets which enable a company to function (Kitching and Blackburn, 2003). Intellectual capital can be embedded in the firm's products, services, working routines or branding, and can be converted into usable resources to generate competitive advantage (Teece, 2000). Although several forms of intellectual capital have been singled out, it is recognized that knowledge is its grounding element (Stewart, 1997). IP can be defined as the set of legal rights which can be asserted in respect of intellectual capital and its products.
IP is generally applied to the ownership of ideas (OECD, 2007), including literary and artistic works (protected by copyright), inventions (protected by patents), signs for distinguishing the goods of an enterprise (protected by trademarks) and other elements. Only some forms of intellectual capital can be protected as IP, and not all the products or working practices in which it is embedded can be protected by law. IP protection and KP are two terms often used interchangeably. In this paper the authors prefer to use KP to indicate the set of possible approaches, methods or tools employed to protect not only the tangible results of the exploitation of knowledge, but also knowledge itself. Accordingly, KP is a broader concept that includes IP rights protection and also \"softer\" protection methods that can be used to deal with the intangible nature of knowledge. Despite its importance, KP is a relatively new research area for services and, particularly, for KIBS. In the following paragraphs the authors analyse and discuss the studies that have dealt with this topic so far. Paallysaho and Kuusisto (2008a, b, 2011) propose a survey and classification of KP methods adopted in the service industries and particularly by KIBS. The value of their work is twofold. First, the suggested classification comes \"from the practice\", since it derives from case studies and a phone survey of 300 small-sized KIBS (average size around 18 employees) in different sectors and countries. Particularly importantly, the work distinguishes between formal and informal protection methods, where the former are based on legal measures and the latter on relationships, trust and organizational arrangements. The surveyed KP methods are classified into three categories according to their degree of formality (Figure 1). Second, the authors provide useful information about the employment of these methods by distinct categories of KIBS. In fact, the sample covers different service sectors (Software Services, Business and Management Consulting, Advertising) and different size classes. The main findings of their investigation are the following:\n3. Rationale and research questions: In light of the above-mentioned literature gaps and the research questions, this study pursues a twofold goal. First, it aims at verifying how structural characteristics of KIBS companies influence the adoption of KP methods, and in what direction. This makes it possible to compare the results of this investigation with the findings of the previous studies. Second, it aims to investigate the influence of knowledge management approaches on the adoption and use of KP methods by companies. This topic is rather overlooked in the literature, so its investigation can provide a better understanding of the use of KP methods by KIBS firms. As the reader can see in the following sections, structural and cognitive characteristics are examined in their influence on both the use of KP methods (referred to here as \"propensity\") and the variety of KP methods adopted, that is, the different types of KP mechanisms used simultaneously by firms.\n3.1 Structural characteristics\n4. Collection of empirical data: method and sample description: Data were drawn from a quantitative analysis of KIBS firms in the Veneto region (northeast of Italy), conducted in July 2009.
The study was part of a broader research project aimed at analysing entrepreneurial dynamics, strategic models, organizations and forms of governance of KIBS belonging to three industries: ICT, D&C, PFS. The sample was based on information drawn from two sources: the general list of companies registered at the Italian Chambers of Commerce, and the records of the Association of Professional Accountants (for the firms not registered in the previous list). The research group randomly extracted 2,984 firms out of the 7,049 KIBS firms operating in the areas of activity relevant to this research in the Veneto region in 2009. A questionnaire composed of 36 multiple-choice questions was designed by the research group and used by a specialized survey company that contacted the identified firms for the interviews. The survey company was trained in how to conduct the interviews by members of the research group, who spent a workday with the interviewers analysing the questionnaire in depth in order to avoid any possible misconceptions and misunderstandings about the meaning of the questions. In addition, the first 30 interviews were done by the interviewers with the assistance of one senior member of the research group, in order to manage any further unforeseen problems with the questions. The interviewee was the entrepreneur/owner or a top manager, in any case the most knowledgeable informant available. A single informant was chosen instead of multiple informants due to the very small size of the firms. In total, 512 firms were interviewed, a response rate of about 17.2 per cent. The data collected were cleaned, eliminating mistakes and incomplete questionnaires from the database, giving a final sample of 471 observations, which is representative of the universe of KIBS firms in the region and accurately mirrors the actual structure of the industries. Afterwards, the data were coded and prepared for statistical elaboration with a standard statistical software package (SPSS 18). A general picture of the surveyed firms is given in Table I. As the table shows, the sample is almost uniformly subdivided over the three industries, and it is mainly formed of small and very small enterprises: the average size of firms is 5.2 employees. There are some dimensional differences among industries, given that D&C companies are the smallest (average size 3.34 employees), while ICT firms are the largest (average size 7.06 employees). Since the average size is very small, the sample used in this study differs from the ones used in other empirical investigations already published in the literature (see the previous section), where surveyed companies are generally much larger. As regards the nature of the delivered services, two-thirds of the investigated companies offer customized services, but the situation differs from one industry to another (see Table II). While personalization prevails among D&C companies, things appear more balanced among ICT firms, with PFS firms lying in between. Companies mainly serve local markets: for the great majority of them (63.8 per cent), most of their sales (75 per cent or more) come from the regional market. Only 10.8 per cent of the companies export their services, and for a relatively small part of their revenues (25 per cent or less).\n5. Empirical findings: 5.1 Structural characteristics and KP\n6. Conclusion: This study aimed at investigating how KIBS companies protect their knowledge.
It is a crucial issue that, until now, has received insufficient attention in the literature: in particular, empirical evidence about the KP methods used by KIBS firms is still scarce, and this research can contribute to a deeper understanding of the topic. This is also necessary in order to develop useful indications both for managers of KIBS companies and for policy makers interested in promoting and sustaining the development of the sector.\n6.1 Implications for research\n",
"role": "user"
},
{
"content": "Purpose: - The purpose of this paper is to examine the issue of knowledge protection in knowledge-intensive business services (KIBS). In particular, it addresses four research questions: do KIBS firms perceive the necessity to protect their knowledge? If so, do they protect it? What method (or combinations of methods) they typically use? What are the main factors affecting the approaches to knowledge protection adopted by these companies?\nDesign/methodology/approach: - The paper illustrates and discusses the findings of a survey of 471 KIBS companies located in the Northeast of Italy and operating in three different sectors: IT Services, Design and Communication, Professional Services.\nFindings: - The study classifies the types of knowledge protection mechanisms and investigates similarities and differences in knowledge protection between KIBS operating in distinct sectors, having different size, and pursuing different knowledge management strategies.\nPractical implications: - The outcomes of the investigation can have some important implications from a managerial viewpoint. In particular, the study provides food for though to both CEOs of KIBS companies regarding the mechanisms of knowledge protection, and to policy makers about the possible public policies that may help companies to protect their competitive knowledge capital without hindering their networks of interaction.\nOriginality/value: - The issue of knowledge protection is relatively new in the literature of KIBS, and findings are quite controversial. In particular, the state-of-the-art research shows that there are some important gaps that the paper tries to fill.\n",
"role": "assistant"
}
] |
Abusive supervision and citizenship behaviors: exploring boundary conditions
|
[
"Leadership",
"Compensation",
"Employees behaviour",
"Employee exchanges",
"Kazakhstan",
"Attitudes",
"Work psychology"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\n__NO_TITLE__: Undesirable management behaviors present significant problems for modern organizations, including harmful consequences for employees and their families, and bottom line performance problems for the organization overall. A survey released by Employment Law Alliance suggests that \"workplace bullying\" (i.e. Verbal abuse, use of threatening gestures, and the use of intimidation tactics) is quite common in the US, with managers and supervisors commonly the perpetrators of the abuse (Leonard, 2007). Approximately 45 percent of the respondents to that survey indicated that they have worked for an abusive boss at some point during their careers.\nLiterature review: Antecedents of organizational citizenship behaviors\nMethod: Sample and procedure\nResults: Descriptive statistics for all study variables, along with the correlations between these variables are reported in Table I, with internal-consistency reliability coefficients reported along the diagonal of this table. Initial review of the correlation coefficients and their respective signs suggests that OCBs and supervisor abuse are negatively related, and OCBs and pay satisfaction are positively related. In other words, subordinates appear less likely to offer OCBs when they report stronger levels of abusive supervision, and more likely to offer OCBs when they report stronger levels of pay satisfaction.Regression analysis results are reported in Table II. Although an analysis of correlations suggests that abusive supervision is negatively related to the subordinate's willingness to perform OCBs (b=-0.33, p<0.01), this relationship weakens in the presence of the other study variables, with statistically insignificant results (b=-0.37, p=ns), leaving this study unable to support H1.We find support for H2 and H3, as indicated by the statistical significance of the cross-product terms. The statistical significance of the cross product term in the exploration of satisfaction with pay (see Table II) lends support for H2 (b=0.05, p<0.05). The results of H3 testing indicates that dyadic duration moderates the relationship between abusive supervision and OCBs (b=-0.04, p<0.05). To further test the moderating hypotheses, simple slopes tests (Aiken and West, 1991) were performed. In regard to satisfaction with pay, simple slopes tests suggest that the relationship between abusive supervision and OCBs is negative and statistically significant at low levels of satisfaction with pay (b=-0.09, p<0.001) and statistically insignificant at high levels of satisfaction with pay (b=0.02, p=ns). The simple slopes test also supports the study findings regarding dyadic duration. When dyadic duration was low, the relationship between abusive supervision and OCBs was not statistically significant (b=0.003, p=ns). When dyadic duration was high, on the other hand, the relationship between abusive supervision and OCBs was negative and statistically significant (b=-0.08, p<0.01). The directions of the moderations are depicted in Figures 1 and 2.\nDiscussion: Strengths and contributions\n",
"role": "user"
},
{
"content": "Purpose: - Previous research indicates that employees reciprocate for abusive supervision by withholding discretionary organizational citizenship behaviors (OCBs). The purpose of this paper is to investigate the boundary conditions of the negative relationship between abusive supervision and OCBs, by investigating time and money (dyadic duration and pay satisfaction) as potential moderating variables to the abusive supervision-OCBs relationship.\nDesign/methodology/approach: - A sample of 357 bank employees in Kazakhstan was used to test hypotheses.\nFindings: - Results indicate that the negative relationship between abusive supervision and OCBs is more pronounced when employees have been supervised by a particular manager for a longer period of time, as well as when employees are less satisfied with their level of compensation.\nResearch limitations/implications: - Limitations include the use of cross-sectional data and the possibility of common method bias.\nPractical implications: - Satisfaction with pay as a moderator may suggest additional costs associated with abusive supervision, as employees may demand higher salaries when working for abusive supervisors. Additionally, dyadic duration as a moderator may suggest that abusive supervisor behaviors over time lead individual employees to withhold more and more OCBs.\nSocial implications: - Organizational cultures can be adversely affected by reactions to abuse, and abusive supervision represents a growing social problem that may necessitate legislation to protect workers.\nOriginality/value: - This paper contributes to the literature by suggesting that employees appear more willing to withhold OCBs in longer-term dyadic relationships, and employees' positive satisfaction with pay appears to lessen the negative relationship between abusive supervision and OCBs. Additionally, this study explores abusive supervision using a non-western sample.\n",
"role": "assistant"
}
] |
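The moderation tests summarized in the entry above follow the standard recipe of adding a cross-product term to the regression and probing simple slopes at high and low values of the moderator (Aiken and West, 1991). Below is a minimal, hypothetical sketch of that procedure, assuming Python with statsmodels, simulated data, and invented variable names (`ocb`, `abuse`, `pay_sat`); it is a generic illustration, not the study's own analysis.

```python
# Hypothetical sketch of a moderated regression with simple slopes tests
# (Aiken & West, 1991). Variable names and data are assumed for illustration.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)
n = 357  # matches the reported sample size
abuse = rng.normal(0.0, 1.0, n)
pay_sat = rng.normal(0.0, 1.0, n)
# Simulate data in which abuse hurts OCBs mainly at low pay satisfaction.
ocb = (-0.05 * abuse + 0.10 * pay_sat + 0.05 * abuse * pay_sat
       + rng.normal(0.0, 1.0, n))
df = pd.DataFrame({"ocb": ocb, "abuse": abuse, "pay_sat": pay_sat})

# Mean-center the predictors before forming the cross-product term.
df["abuse"] -= df["abuse"].mean()
df["pay_sat"] -= df["pay_sat"].mean()

# 'abuse * pay_sat' expands to both main effects plus their interaction.
model = smf.ols("ocb ~ abuse * pay_sat", data=df).fit()
print(model.params)  # coefficients for abuse, pay_sat, abuse:pay_sat

# Simple slope of abuse on OCBs at low / high (+/- 1 SD) pay satisfaction.
b = model.params
sd = df["pay_sat"].std()
for label, m in (("low pay satisfaction", -sd), ("high pay satisfaction", sd)):
    slope = b["abuse"] + b["abuse:pay_sat"] * m
    print(f"simple slope of abuse at {label}: {slope:.3f}")
```

The pattern reported in the entry (a negative abuse-OCB slope at low pay satisfaction that disappears at high pay satisfaction) corresponds to a positive coefficient on the cross-product term, as simulated here.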
Back-testing extreme value and Levy value-at-risk models: Evidence from international futures markets
|
[
"Value-at-Risk",
"Back-testing",
"Extreme value",
"Generalized hyperbolic distributions",
"Lévy–Kintchine formula"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\n1. Introduction: The tail-based risk measure value-at-risk (VaR) remains popular with practitioners and academics for its simplicity and as the financial institutions need to follow the guidelines of the Basel accord for assessing risk exposure. A significant effort has been extended to measure VaR and developing back-testing methodologies (Christoffersen, 2003, 2009a, 2009b; Christoffersen and Pelletier, 2004; Alexander, 2008, Kim et al., 2011). In recent years, more sophisticated VaR models have been developed by taking into account time-varying volatility, conditional skewness and kurtosis (Dark, 2010; Wilhelmsson, 2009; Sajjad et al., 2008; Bali and Theodossiou, 2008; Giot and Laurent, 2003). Grigoletto and Lisi (2011) evaluate the performance of VaR using GARCH with dynamic skewness and kurtosis and find significant role asymmetry in returns. So and Wong (2012) propose statistical methods for estimating multiple-period expected shortfall (ES)[1] and median shortfall under GARCH models and develop back-testing methods for assessing their performance for seven international markets. McNeil and Frey (2000) measure VaR and ES by fitting GARCH-models to return data using pseudo maximum likelihood and use a generalized Pareto distribution (GPD) approximation and find satisfying performance in back-testing and emphasize that such approximation is preferable to other models, as it can deal with asymmetries in the tails of data. Longin (1996), Jondeau and Rockinger (2003) and Gencay and Selcuk (2004) show that the distribution of the extreme returns can be characterized by the generalized extreme value (GEV) distribution and suggest that the probability of extremes should be taken into consideration when calculating VaR. Gettinby et al. (2004, 2006), and Tolikas and Gettinby (2009) show generalized logistic distribution best fit the empirical data compared to GEV and/or GPD for extreme daily share returns in developed countries. Cheng et al. (2015) model exceedances over a price threshold using a GPD to investigate the efficacy of margin, capital requirement and price limits for managing default risk in futures markets under VaR, ES and spectral risk measures. On the other hand, a number of studies examine hedging effectiveness based on minimum VaR or ES (Harris and Shen, 2006; Cao et al., 2010; Barbi and Romagnoli, 2014). Pownall and Koedijk (1999) and Bali (2003) present evidence that extreme-value VaR outperforms models which rely on the assumption of normal distribution of returns[2].\n2. Characterization in a Levy framework: The characteristic function of a stochastically continuous process starting at zero and with stationary independent increments can be written as:\n3. Initial data analysis: Our analysis is based on future contracts return data of S&P500, FTSE100, DAX, Hang Seng and Nikkei 225 indices for the period from January 1, 1991 to December 31, 2003 collected from DataStream. We choose the futures contracts at the beginning of each trading month that expire in the following trading months and rollover from one expiring contract to the next one. When dealing with bank holidays, DataStream considers padding the data set and takes the bank holidays end-of-day price to be the previous trading day's end-of-day price. We have 3,392 daily returns for each futures market. 
Our selection of data and sample period is intentional, so that we can compare our results with Cotter and Dowd (2006, 2010), who use the same data and sample period for VaR and ES measures of risk.\n4. Estimation of value-at-risk: extreme value vs Lévy: Apart from a few specific cases, VaR is obtained as the solution of the quantile-integral equation:\n5. Goodness of fit tests: extreme value vs Lévy: The Anderson-Darling (AD) test is particularly suitable among the GOF tests for assessing the performance of tail-based risk management models[6]. Anderson and Darling (1952, 1954) proposed a weighting rule for the distance-based Kolmogorov-Smirnov test, which puts more emphasis on the tail observations. More recently, Anna et al. (2005) provide a formula for an AD-test statistic when observations are only available at the extreme tail and the distribution of the complete sample is unknown. It is referred to as the AD test for left-truncated data. This scenario applies precisely to an EV model. The AD-test for complete distributions with closed-form densities (as is the case with our Lévy models) requires the p-values to be obtained analytically. The p-values for the AD-test with left-truncated data then need to be obtained either by Monte Carlo simulation or bootstrapping. We carry out bootstrapping with 1,000 resamples to obtain the p-value for the EV model. The critical values are obtained from the VaR of the respective models. For the EV model, the VaR still works as a critical value for the left-truncated AD-test, as VaR itself is computed from a left-truncated density:\n6. Comparison of extreme value-value-at-risk and Lévy-value-at-risk models using dynamic calibration: We compare EV and Lévy estimates of VaR under EV and Lévy model assumptions. The calibrated parameters of VaR based on the GPD and the four chosen Lévy models are presented in Table IV. As VaR is based on a high confidence level, it caters for high trading losses due to extreme events. For all indices, the VaR estimates are approximately of the same order of magnitude as the empirical values. However, it is difficult to ascertain whether any particular approach is better than the other. On some occasions, the EV model provides VaR estimates that are closer to their empirical counterparts. On other occasions, the Lévy-VaR estimates are closer to their empirical counterparts.\n7. Back-testing using dynamic calibration: As the daily VaR(α) is estimated on daily returns, the VaR for a one-day holding of the asset should be violated only 100α% of the time, allowing for all possible extremities. We thus have an indicator variable describing the so-called hit sequence that identifies the days of VaR violation in the next T trading days. It is not, however, possible to identify the particular day or days on which a VaR violation occurs (when the indicator variable assumes '1'), but we are able to identify that the violations of VaR are restricted to 100α% of those T days. Thus, the hit sequence follows a Bernoulli distribution with probability α of assuming '1'. This provides the framework for testing the performance of any VaR model and is known in the literature as back-testing. In most applications α is very low, usually 1 or 5 per cent (or very high, usually 99 or 95 per cent, if VaR is estimated on returns multiplied by -1) (Christoffersen, 2003). There are three useful hypotheses used in back-testing risk models.
The unconditional hypothesis does not require any assumption regarding current violation status; it provides statistical evidence as to whether the observed fraction of violations of a particular VaR model is significantly different from its promised fraction α. The evidence is provided by a test statistic that asymptotically follows a χ2 distribution with one degree of freedom:\n8. Discussion: Our back-testing of Lévy-VaR models has provided significant insight regarding the usefulness of tail modeling and full-density modeling for financial risk management. Nevertheless, back-testing EV and Lévy-VaR models using dynamic calibrations has not allowed us to assert any preference between the EV and Lévy approaches. Given the observed PV performance on back-testing, the EV-VaR performance is not distinguishable from that of the full density-based Lévy models. Although the PVs corresponding to an EV model are closer to the promised fraction of violations (the complement of the coverage levels) for the S&P500, FTSE100 and DAX indexes, the outcome is reversed for the other indexes (Hang Seng and Nikkei 225): there, the PVs corresponding to EV models deviate more from the promised fraction of violations than those for Lévy models. In spite of the fact that the Lévy densities are calibrated on the entire data set and the EV models are calibrated using only the extreme observations, we conclude that the tail-targeting and full-density approaches do not produce materially different PV estimates, and that the tail-based risk measure of VaR obtained under the tail-based EV model and the full density-based Lévy models are similar.\n9. Conclusion: We explore both the tail-targeting EV model and the full density-based Lévy models for estimating and back-testing the tail-based risk measure of VaR. The back-testing results reveal that neither the Lévy nor the EV approach is comprehensively superior to the other. This is confirmed by the estimates of the VaR obtained under both approaches for the major indexes. Thus, the assumption that a tail-targeting model is more likely to provide superior, consistent estimates of VaR is empirically disputed. Our findings confirm that the superiority of a particular approach for a particular time frame is essentially random. Thus, we cannot ascertain that a particular full-density (Lévy) model is superior over all ranges of the data. Given such similarity of estimation and back-testing performance under both approaches, the choice is likely to be driven by a compromise between simplicity and adequacy. In other words, practitioners should be cautious about placing too much emphasis on the simplicity of EV measures while discarding the role of Lévy-based measures, or vice versa. Our results, to some extent, reflect the findings of Cotter and Dowd (2006) that different precision metrics suggest different levels of precision of VaR and ES estimators under the EV measure[8].\n",
"role": "user"
},
{
"content": "Purpose: The purpose of the paper is to back-test value-at-risk (VaR) models for conditional distributions belonging to a Generalized Hyperbolic (GH) family of Levy processes - Variance Gamma, Normal Inverse Gaussian, Hyperbolic distribution and GH - and compare their risk-management features with a traditional unconditional extreme value (EV) approach using data from future contracts return data of S&P500, FTSE100, DAX, HangSeng and Nikkei 225 indices.\nDesign/methodology/approach: The authors apply tail-based and Levy-based calibration to estimate the parameters of the models as part of the initial data analysis. While the authors utilize the peaks-over-threshold approach for generalized Pareto distribution, the conditional maximum likelihood method is followed in case of Levy models. As the Levy models do not have closed form expressions for VaR, the authors follow a bootstrap method to determine the VaR and the confidence intervals. Finally, for back-testing, they use both static calibration (on the entire data) and dynamic calibration (on a four-year rolling window) to test the unconditional, independence and conditional coverage hypotheses implemented with 95 and 99 per cent VaRs.\nFindings: Both EV and Levy models provide the authors with a conservative proportion of violation for VaR forecasts. A model targeting tail or fitting the entire distribution has little effect on either VaR calculation or a VaR model's back-testing performance.\nOriginality/value: To the best of the authors' knowledge, this is the first study to explore the back-testing performance of Levy-based VaR models. The authors conduct various calibration and bootstrap techniques to test the unconditional, independence and conditional coverage hypotheses for the VaRs.\n",
"role": "assistant"
}
] |
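The unconditional coverage back-test described in the entry above is the standard Kupiec likelihood-ratio test (Christoffersen, 2003): with x violations in T days and a promised violation fraction α, the statistic LR_uc = -2[(T - x)ln(1 - α) + x ln α - (T - x)ln(1 - π̂) - x ln π̂], where π̂ = x/T, is asymptotically χ2-distributed with one degree of freedom. Below is a minimal, hypothetical Python sketch of this test on simulated data; it uses neither the paper's futures data nor its code.

```python
# Hypothetical sketch of the unconditional coverage (Kupiec) back-test for a
# VaR model: the 0/1 hit sequence is Bernoulli(alpha) under the null, and the
# likelihood-ratio statistic is asymptotically chi-square with 1 df.
import numpy as np
from scipy.stats import chi2

def kupiec_lr(hits, alpha):
    """Return (LR statistic, p-value) for a 0/1 VaR violation sequence.

    Assumes 0 < sum(hits) < len(hits); degenerate cases need special handling.
    """
    T, x = len(hits), int(np.sum(hits))
    pi = x / T  # observed violation fraction
    def loglik(p):
        # Bernoulli log-likelihood of the hit sequence at violation prob p.
        return (T - x) * np.log(1.0 - p) + x * np.log(p)
    lr = -2.0 * (loglik(alpha) - loglik(pi))
    return lr, 1.0 - chi2.cdf(lr, df=1)

# Simulated example: heavy-tailed daily returns against a 99% VaR forecast.
rng = np.random.default_rng(0)
T, alpha = 3392, 0.01  # T matches the paper's sample length
returns = 0.01 * rng.standard_t(df=4, size=T)
var_99 = np.quantile(returns, alpha)    # naive in-sample VaR, for illustration
hits = (returns < var_99).astype(int)   # 1 on days the VaR is violated

lr, pval = kupiec_lr(hits, alpha)
print(f"violations: {hits.sum()}/{T}, LR = {lr:.3f}, p-value = {pval:.3f}")
```

The independence and conditional coverage hypotheses mentioned in the entry extend this idea by also testing whether violations cluster in time, but the Bernoulli likelihood above is the core building block.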
Revisiting public health informatics: patient privacy concerns
|
[
"Public health",
"Knowledge management",
"Information technology",
"Risk management",
"Health law or regulation",
"Informatics"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\nThe case for a new data sharing policy framework: Electronic patient record (EPR) systems store detailed patient information and serve to communicate a patient's longitudinal clinical data across multiple clinical care sites (Hayrinen et al., 2008). They have become increasingly linked through data portals in wide networks so that authorized entities can add laboratory and other diagnostic test results; automate required public health case reporting; facilitate insurance billing and reimbursement; permit patients to view their own records, make appointments and refill prescriptions, etc. Additionally, \"de-identified\" data sets (containing pertinent case details of individuals without revealing patient identity) and public information summaries (giving only aggregated overviews) have research application. It is not surprising that many different professions and perspectives are involved in creating, maintaining and improving the ability to share such data in a responsible way.\nThe case for faster progress: Three years later, in 2017, little progress seems to have been made to update privacy legislation in British Columbia (FIPPA & PIPA)[1], and in Canada. The Federal Privacy Act, which covers the personal information-handling practices of federal government departments and agencies, has had minimal substantive updates since it was first introduced in 1983 (www.priv.gc.ca/en/privacy-topics/privacy-laws-in-canada/the-privacy-act/pa_r/pa_ref_rec_161101/). Since 2001, PIPEDA[2], the federal private-sector privacy law, has only been updated in minor ways and now needs to be modernized (www.priv.gc.ca/en/opc-actions-and-decisions/advice-to-parliament/2017/parl_20170216/; www.priv.gc.ca/en/privacy-topics/privacy-laws-in-canada/the-privacy-act/pa_r/). Similarly in Washington State, no new data sharing legislation has been introduced during this time.\nWhat framework might help guide the path forward?: The OECD Privacy Framework with its ten principles has, since 1980, been foundational in the global development of privacy legislation, regulations, policy and practice. Recognizing the significant changes to the collection, use and movement of personal information over the past 30 years, the Organization for Economic Co-operation and Development revised the Framework in 2013 to reflect the role of technological advancements in the frequency and complexity of interactions involving personal data. Regulators and privacy professionals faced challenges in applying the traditional privacy principles in an environment that was more globally connected than ever before, for example:\n",
"role": "user"
},
{
"content": "Purpose: Expanding networks of data portals and repositories linked to electronic patient record systems, along with advances in information technology, have created both new opportunities in improving public health and new challenges in protecting patient privacy. The purpose of this paper is to review stakeholder perspectives and provide a framework for promoting implementation of current privacy protection improvement recommendations.\nDesign/methodology/approach: This paper summarizes a workshop session discussion stemming from the 2017 Information Technology and Communication in Health (ITCH) biennial international conference in Victoria, British Columbia, Canada. The perspectives within health service research, journalism, informatics and privacy protection were represented.\nFindings: Problems underlying gaps in privacy protection in the USA and Canada, along with then-current changes recommended by public health leaders as well as Information and Privacy Commissioners, were identified in a session of the 2015 ITCH conference. During the 2017 conference, a workshop outlined the current situation, identifying ongoing challenges and a lack of significant progress. This paper summarizes that 2017 discussion identifying political climate as the major impediment to progress on this issue. It concludes with a framework to guide the path forward.\nOriginality/value: This paper provides an international perspective to problems, resources and solution pathways with links useful to readers in all countries.\n",
"role": "assistant"
}
] |
Technology enhanced learning and plagiarism in entrepreneurship education
|
[
"Entrepreneurship",
"Education",
"Dishonesty",
"Plagiarism",
"Students",
"Ethics",
"Business environment"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\nIntroduction: In the context of widespread use of the internet in entrepreneurship education, there is an ever-increasing emphasis on the need to develop the authorship and literacy skills of students. Existing literature on plagiarism calls for educational interventions that go beyond definitions of plagiarism in an academic handbook and provide students with contextualised examples, case scenarios and opportunities for practising their referencing and paraphrasing skills (Barry, 2006; Smith et al., 2008; O'Dwyer et al., 2010).Technology enhanced learning offers great potential for this, and many universities offer opportunities for students to avail of online courses to develop such skills. Obviously, the significant advantage of such alternatives is that institutions can save money and reach more people by using online modules and research programmes. Thousands of staff and students, including those studying/working part-time or at a distance, can simultaneously access online courses, a reality which proves far more efficient than the \"traditional\" and more costly approach involving the running of face-to-face workshops (Risquez and Ohaodha, 2010). However the effectiveness of such courses is assumed rather than demonstrated, as the rationale for their adoption can be driven by political rather than educational concerns. Very often these types of educational resources are offered outside the curriculum (from Library services, writing centres, etc.), relying on students taking them on a voluntary basis in their spare time.This paper investigates the relationship between plagiarism prevention online tutorials and entrepreneurship students' ethical views of plagiarism and their self reported engagement in plagiarism. It also analyses the relationship between an online plagiarism prevention tutorial and applied measures of plagiarism which assess students' ability to recognise and avoid instances of plagiarism. The findings contribute to entrepreneurship education literature by demonstrating that educators should focus on good educational design instead of considering e-learning choices as a quick alternative - thus emphasising quality rather than quantity.\nLiterature: A study of attitudes towards cheating by Roig and Ballew (1994) noted that business students were more tolerant of cheating than other student groups. This is particularly significant when considered in conjunction with (Allmon et al., 2000) study of business students' ethical behaviours which found that perceptions of ethical behaviour in the classroom are considered surrogates for future business behaviours.The predominance of electronic media in an information society has given rise to increasing incidence of plagiarism (Scanlon and Neumann, 2002) where \"the explosion of electronic sources of information has not just made copying easier, it has also made it much more central to our students cultural and social experiences\" (Wheeler and Anderson, 2010, p. 167). The majority of third level students have grown up with the internet. For such students the internet is central to the way they communicate, they collect information using multiple sources, jumping between multiplicity of reference points including social networking sites, entertainment sites and academic sites (Wheeler and Anderson, 2010). In many cases the information they pick up on each site is cut and pasted together to form the foundation of their intellectual development. 
This forms the core of an educational dilemma whereby students circumvent \"the process of analysing and synthesising ideas, and reformulating them in writing,\" by cutting and pasting, thus evading crucial aspects of learning (Wheeler and Anderson, 2010, p. 168). In a study of 2,600 third level students, McCabe (2009) found that just over 40 per cent of respondents believed that cutting and pasting from the internet is either trivial plagiaristic behaviour or not plagiaristic at all. This raises the issue of student understanding of what plagiarism is, their recognition of it and their avoidance of it. However, exploring plagiaristic behaviour is a complex process given that self-reported measures are commonly used in plagiarism studies. Such studies are based on assumptions that students understand plagiarism, will declare their engagement in plagiarism, expect to be punished for plagiaristic behaviour and that they have strong ethical views, all of which combine to affect student behaviour (Diekhoff et al., 1999, Newstead et al., 1996, Hendershott et al., 1999, Chapman and Lupton, 2004, Rawwas and Al-Khatib, 2004, McCabe et al., 2008). Brown and Howell (2001) explored plagiarist behaviour and its avoidance using scenarios of verbatim and paraphrased plagiarism, finding that guidance about plagiarism which included descriptions of how to avoid it helped students to identify the examples as breaches of academic guidelines. Recently, emerging research has explored the use of educational technology to observe incidences of plagiarism and the impact of preventive measures from a behavioural perspective (Roig, 1997, Ledwith and Risquez, 2008, Martin, 2005, Schuetze, 2004, Keck, 2006, Landau et al., 2002). In a study by Belter and Du Pre (2009), the efficacy of an online academic integrity module was assessed; the authors found that in the pre-intervention group 25.8 per cent of the papers had evidence of plagiarism, as opposed to 6.5 per cent in the group who had completed the module. Furthermore, the study noted that the majority of plagiarism was a result of inadequate knowledge regarding citation. This finding contrasts with a study by Ellery (2008), who found that, following explicit tutorial instruction on plagiarism, 25 per cent of students still plagiarised, the majority from the internet. This study builds on these studies and contributes to the body of literature by exploring online educational plagiarism interventions and their impact on students' approaches to contextualised examples of plagiarism, case scenarios and opportunities for practising their referencing and paraphrasing skills.\nResearch methods: Context\nResults: The empirical results are presented under two sub-headings:\n1. Random assignment: experimental versus control.\n2. Non-random assignment: experimental versus control.\nRandom assignment: experimental versus control\nDiscussion: The study found that, when assigned randomly to the experimental condition (rather than relying on students to participate voluntarily), completion of the online tutorial is not related to the students' self-reported views on plagiarism. However, those who were required to complete the online tutorial tended to perform better at identifying a case of verbatim plagiarism as an academic breach on their own behalf, and also felt sure that their lecturers would view it as plagiarism.
This may be explained by the fact that self-reported views may be measuring a motivational dimension while the case study may call a more practical and applied dimension that may be more easily influenced by an educational intervention. While being careful not to infer causality from a comparison of means, the research demonstrates some indication of the possibility that the online tutorial did not influence what students think of plagiarism, but it perhaps influence their ability to recognise it in real practice as a breach of academic guidelines.This study did not find any association between the completion of the tutorial and self-reported views on plagiarism for the randomly assigned groups (VT and VNT), however, the case is different for those that had the option to complete the tutorial or not (CT and CNT). Those who voluntarily completed the tutorial scored significantly lower in two of the measures of engagement and two of the measures of ethical views on plagiaristic behaviour. It is very probable that the self-selection in completing the tutorial accounts for these differences more so than the effect of the tutorial itself. Not surprisingly, the study also illustrates that those students who chose to voluntarily complete the tutorial in their own time outside of class where also able to better identify a case of verbatim plagiarism as a breach of academic guidelines than those who chose not partake in the tutorial at all, and thought their lecturers would also consider it as a breach. In summary, the findings for the non-random assignment groups point to the likely influence of the self-selection on the views and perceptions of students rather than the impact of the online plagiarism tutorial per se.In the context of the hypotheses proposed, the conclusions must be drawn mainly from the first set of comparisons (Random assignment: experimental versus control), given the impact of self-selection in the non-random assignment contrasts. The empirical findings do not support the first two hypotheses (\"Students who complete an online tutorial on plagiarism will report lower engagement in plagiarism\", and \"Students who complete an online tutorial on plagiarism will hold stronger ethical views with regards to plagiarism\"). However, support was found for H3 (\"Students who complete an online tutorial on plagiarism will be better able to recognise and avoid a practical scenario of verbatim and paraphrased plagiarism\").\nConclusion: This study investigated the relationship between entrepreneurship students' ethical views on plagiarism, their self-reported engagement in plagiarism and their participation in plagiarism prevention online tutorials. The results of the study suggest that:* More than one online plagiarism prevention tutorial may be required to change self-reported views relating to engagement in plagiarism, perception of peer participation in plagiarism and students' ethical views.* However, even such a small intervention demonstrates an observable difference in students' capacity to recognise a case of verbatim plagiarism as an academic breach in practice, thereby supporting the efficiency of such interventions in practical terms.* The results also show the importance of the integration of these interventions in the curriculum for all students. 
Paradoxically those students who may have the greatest need for such tutorials may be most reluctant to participate, and therefore, relying on voluntary participation may be counterproductive.Although the study utilised reliable and valid research measures supported in previous literature (Brown and Howell, 2001; Scanlon and Neumann, 2002) it is acknowledged that the current debate is open with regard to the extent self-reported views on plagiarism are a reliable measure of reality (O'Dwyer et al., 2010). Future research in this area should explore the use of more applied measures.The implications of this research focus on the need for the integration of a series of online plagiarism prevention interventions into the core of the relevant curricula. This would expose students to the correct citation procedures for both verbatim and paraphrased material. The findings contribute to entrepreneurship education literature by demonstrating that educators should focus on good educational design instead of considering e-learning choices as a quick alternative - thus emphasising quality rather than quantity.\n",
"role": "user"
},
{
"content": "Purpose: - This paper seeks to explore the relationship between entrepreneurship students' ethical views on plagiarism, their self reported engagement in plagiarism and their participation in an online plagiarism prevention tutorial.\nDesign/methodology/approach: - The study is based on a questionnaire administered to 434 undergraduate university entrepreneurship students, combining self-reported data with behavioural measures.\nFindings: - The results illustrate that more than one online plagiarism prevention tutorial is required to change self-reported views relating to engagement in plagiarism, perception of peer participation in plagiarism and students' ethical views. However, it should be noted that even such a small intervention demonstrates an observable difference in students' capacity to recognise a case of verbatim plagiarism as an academic breach in practice.\nResearch limitations/implications: - The research demonstrates that educators should focus on good educational design, educating students regarding plagiarism prevention while making use of technology enhanced learning, instead of considering e-learning choices as a quick solution to plagiarism.\nOriginality/value: - The paper focuses on an emerging aspect of plagiarism education, that is, the use of technology enhanced learning. While acknowledging the potential of technology enhanced learning in plagiarism prevention the paper notes that plagiarism prevention should be embedded in the curriculum rather than addressed in an ad-hoc manner.\n",
"role": "assistant"
}
] |
An introduction to pensionomics
|
[
"Econographicology",
"Multi-dimensional geometry",
"Multi-dimensional graphs",
"Pension systems",
"E65"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\n1. Introduction: The concept of retirement has evolved constantly, transforming societies and shaping both income and non-income dimensions of well-being. Pension entitlement turned gradually from political discourse to a human right discourse. Pension schemes have extended the scope of insurance coverage beyond labour markets and the lifecycle, supporting the broader needs of the entire population. Furthermore, pension schemes are widely acknowledged as drivers of economic growth: they enhance labour productivity; foster smooth consumption; and create a stable economic environment for investment and innovation. Current expectations require pension schemes to adopt proactive and reactive policies to examine options for mitigation or modification of potential consequences in anticipation of exceptional events.\n2. From the pension scheme to social security to welfare state: There is no country in the world without any form of a pension scheme. Some pension schemes are still in their infancy, others are well-established, but in the end, all are conditioned to economic, demographic and cultural characteristics of each country. In developed countries, pension schemes initiated to accomplish three objectives: fight poverty; provide social assistance and promote social cohesion. Despite certain commonalities, pension schemes exhibit substantial differences among developed countries regarding the objectives, scope, coverage, benefits and role of the public and private sectors in the welfare landscape.\n3. Definition and classification of pensionomics: Pensionomics places the concept of retirement in a multi-disciplinary context. Pensionomics overcomes theoretical and empirical limitations encountered by the path-dependency perspective, developing a new research agenda to study pension schemes under the historical, cultural, social, economic, political and environmental prism. Integrating diversified data, techniques, perspectives and concepts, pensionomics' objective is to connect natural and man-made events with social protection mechanisms for the development of a dynamic social protection framework, where individual, community and society needs are met effectively and efficiently by implementing tailored policies, closely related to their specific context.\n4. Pensions Consistency Index: The PC-Index is a statistical measure of changes in pension performance. The PC-Index evaluates the consistency level of a pension system by pinpointing their deficiencies and mapping their patterns. The PC-Index involves 10 main variables, of which consist 50 sub-variables. The PC-Index implementation involves the following steps:\n5. Concluding remarks: The consideration of pensionomics concept as an evaluation tool for pension schemes provides insights that are helpful to explain performance differentials. Taking definition, classification and evaluation as a guiding principle, the new conceptual framework can be a useful point of reference for the overall evaluation of pension schemes revealing deficiencies that traditional evaluation methods cannot detect. The multi-disciplinary approach focusses on building inter-sectoral and holistic policies able to respond to the multi-dimensional uncertainties of the new dynamic environment.\n",
"role": "user"
},
{
"content": "Purpose: The purpose of this paper is to explore the concept of pensionomics as a prospective tool for pension evaluation. This paper suggests a paradigm shift - a multi-disciplinary synthesis of differing perspectives in evaluating pension's overall performance based on past work on pension evaluation - incorporating non-economic variables with significant impact on economic growth and social development.\nDesign/methodology/approach: This paper suggests a new analytical tool called \"Pensions Consistency (PC) Index\" that identifies the level of consistency and the strengths and weaknesses within any pension system. The new conceptual framework focusses on building inter-sectoral and holistic policies able to respond to the new multi-dimensional dynamic environment.\nFindings: The consideration of pensionomics concept as an evaluation tool for pension schemes provides insights that are helpful in explaining performance differentials. Taking definition, classification and evaluation as a guiding principle, the new conceptual framework can be a useful point of reference for the overall evaluation of pension schemes, revealing deficiencies that traditional evaluation methods cannot detect. The multi-disciplinary approach focusses on building inter-sectoral and holistic policies that are able to respond to the multi-dimensional uncertainties of the new dynamic environment.\nResearch limitations/implications: The heterogeneity and complexity in event dynamics are systemic in the sense that the impact is far from linear. The idiosyncratic nature of unexpected and unpredictable events is rather a result of multi-dimensionality based, among others, on magnitude, frequency, timing, intensity and impact. It is plausible to argue that crisis episodes can destabilize critical systems of economic activity, producing economic spillovers that can directly or indirectly affect the sustainability of pension schemes. If the calculation of direct economic impact is readily traceable, the estimation of indirect economic impact can be an onerous task.\nPractical implications: Pensionomics places the concept of retirement in a multi-disciplinary context. Pensionomics overcomes theoretical and empirical limitations encountered by the path-dependency perspective, developing a new research agenda to study pension schemes under historical, cultural, social, political, economic, political and environmental prism. Integrating diversified data, techniques, perspectives and concepts, pensionomics' objective is to connect natural and man-made events with social protection mechanisms for the development of a dynamic social protection framework where individual, community and society needs are met effectively and efficiently by implementing tailored policies, closely related to their specific context.\nSocial implications: The concept of retirement has evolved constantly, transforming societies and shaping both income and non-income dimensions of well-being. Pension entitlement turned gradually from a political discourse to a human right discourse. Pension schemes have extended the scope of insurance coverage beyond labour markets and the lifecycle, supporting the broader needs of entire population. Furthermore, pension schemes are widely acknowledged as drivers of economic growth: they enhance labour productivity; foster smooth consumption; and create a stable economic environment for investment and innovation. 
Current expectations require pension schemes to adopt proactive and reactive policies to examine options for mitigation or for modification of potential consequences in anticipation of exceptional events.\nOriginality/value: This paper suggests a paradigm shift, a multi-disciplinary approach called pensionomics, and this \"multi-disciplinary\" focus builds a new analytical framework to evaluate pension's overall performance based on past work on pension evaluation, incorporating non-economic variables with significant impact on economic growth and social development. PC-Index introduces a comprehensive evaluation tool to study the coverage, performance, efficiency, effectiveness, current trends and future possibilities of pension schemes.\n",
"role": "assistant"
}
] |
Hitting the sustainability sweet spot: having it all
|
[
"Research",
"Sustainable development",
"Performance management"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\n__NO_TITLE__: Among the most important issues facing business today is sustainability. Sustainability is concerned with meeting the needs of the present without compromising the ability of future generations to meet their own needs. A sustainable enterprise is one that considers its impacts on the environment and on society in general, while maintaining financial profitability. Sustainable enterprises measure success in terms of a triple bottom line (Savitz and Weber, 2006) comprised of:1. the environmental bottom line;2. the societal bottom line; and3. the financial bottom line.Sustainable enterprises search for a \"sweet spot\" where harmful social and environmental impacts are minimized, and an adequate rate of return is realized. This philosophy is illustrated in Figure 1.\nThe environmental bottom-line: For decades, the concern about our natural resources being depleted by human activities and economic expansion has grown deeper and deeper. More and more environmental problems have been recognized, including air and water pollution, ozone depletion, global warming, toxic wastes in groundwater and worldwide destruction of ocean fisheries (e.g. World Resources Institute, 2004). With the growing awareness of the depletion and/or degradation of natural resources, the environmental sustainability movement has awakened nations, businesses, and individuals to the necessity of protecting the environment for the benefit of generations to come.Some consumer and technology-oriented businesses have been proactive in this area because they have financial reasons to commit to environmental sustainability. A growing number of consumers want energy-efficient products and avoid companies that have a reputation for environmentally harmful practices or products. Findings from the 2007 ImagePower(r) Green Brands Survey (Landor Associates, 2007) indicated that despite differing sentiments on politics and religion, 90 percent of Americans agree that there are important green issues and problems, and 82 percent believe it is important for companies to implement environmentally-friendly practices.In 2005, Lee Scott, Wal-Mart's CEO, set several ambitious goals: increase the efficiency of its vehicle fleet by 25 percent over the next three years, eliminate 30 percent of the energy used in stores, and reduce solid waste from US stores by 25 percent in three years. To accomplish these goals, Wal-Mart committed to invest $500 million in sustainability projects. As Scott said:To me there can't be anything good about putting all these chemicals in the air. There can't be anything good about the smog you see in cities. There can't be anything good about putting chemicals in these rivers in Third World countries so that somebody can buy an item for less money in a developed country. Those things are just inherently wrong, whether you are an environmentalist or not (Gunther, 2006).Some businesses have committed to reducing the waste that they generate. These companies subscribe to the traditional environmental mantra of \"reduce, reuse, and recycle.\" The financial benefit is that reducing waste actually means reducing inputs that eventually are disposed of. Toyota has eliminated almost all of the waste from its US headquarters near Los Angeles. Wal-Mart has begun to measure how much packaging its suppliers use. 
Wal-Mart's goal is to reduce packaging by 5 percent, which it estimates will save the company and its suppliers about $11 billion (Gunther, 2007).Other companies that are recognized as being environmentally responsible include: S.C. Johnson which has developed and adopted the Greenlist process that enables the company to identify and eliminate raw materials that harm human and environmental health; Hewlett-Packard which operate massive e-waste recycling plants and has developed 100 percent recyclable products; and Continental Airlines which has spent over $16 billion to replace its fleet with more fuel-efficient and lower-emissions aircraft (Fortune, 2007).However, many businesses have responded reluctantly to this challenge by moving through the stages of disbelief, denial, anger, bargaining, reflection, testing, and finally acceptance and hope. This is particularly true in areas such as responsiveness to global warming. Take Exxon for example. Exxon Mobil has a lot at stake in the global warming debate. According to a report by the Union of Concerned Scientists, the carbon dioxide emitted by the end users of all of Exxon Mobil's products is greater than the emissions of all but five countries.Exxon has been the prime example of denial among those convinced of global warming. It opposed the Kyoto climate change treaty. In 2001, it pressed the Bush administration to remove an outspoken scientist from the Intergovernmental Panel on Climate Change. In 2005, a White House official who had been accused of altering scientific reports to cast more doubt on global warming went to work for Exxon.However, in 2006 Exxon issued a report titled \"Tomorrow's energy\" which said that \"the accumulation of greenhouse gases in the Earth's atmosphere poses risks that may prove significant for society and ecosystems.\" Since then, Exxon has donated $100 million to a climate change project at Stanford University. Kenneth Cohen, Exxon's vice president for public affairs, recently said \"the risks to society and ecosystems could prove to be significant, so despite the areas of uncertainty that do exist, it is prudent to develop and implement strategies that address the risks\" (Mufson, 2007).The objective of environmental sustainability may seem to be at odds with the objective of financial sustainability. At the inception of an environmental sustainability program, the firm may incur start-up investments or see its operating costs increase, making the sustainability initiative less attractive. Even when the sustainability program passes the start-up stage, the company will still face the challenge of establishing objective measures to assess and monitor progress towards its goals. Furthermore, being an environmentally sustainable enterprise is much more than just implementing sustainability principles; it is a new way of thinking, a significant corporate culture change, a change of the system of values and beliefs among the company, from top management to all employees, so that sustainability mind-set can be translated to commitment and voluntary behavior. This change cannot be done in days or weeks. Instead, it may take a long period of time with lots of efforts, commitment, and insights.\nThe societal bottom line: The societal bottom line or corporate social responsibility (CSR) is concerned with corporate decisions that affect society or, more specifically, the firm's stakeholders including employees, suppliers, and the communities in which the firm operates (Smith, 2003). 
The CSR philosophy holds that companies, while doing business, proactively engage their stakeholders and incorporate solutions to their concerns in the company's business practices. This philosophy has even been proposed as an alternative to shareholder value-based philosophies of management (Freeman, 1994). Principles of CSR may include commitment to diversity, fair compensation, a safe and healthy workplace, fair dealings with suppliers, selection of suppliers that adhere to the principles of CSR, and engagement with governments and communities in which the business operates to improve quality-of-life.Industry leaders such as Procter & Gamble, PepsiCo, Microsoft, and Starbucks have made diversity a high priority from the executive suite to front-line marketing and operations positions. Why do they believe that diversity is important? At Procter & Gamble, the point at which a consumer selects a P&G product instead of one of its competitors' is known as the \"first moment of truth.\" Winning the \"first moment of truth\" is most likely to occur when marketing and sales teams have a perspective that is similar to that of prospective customers (Schueffel and Istria, 2005). Indra Nooyi, the current CEO of PepsiCo, argued (Walkup, 2003, p. 97):Great ideas still come from people ... The challenge is to create the right environment to encourage innovation and ideas. The diversity of people in a corporation promotes innovation because it achieves greater diversity of ideas. There is a link between diversity and innovation that's not theoretical - it's real.And, while Microsoft is not often thought of as a warm and fuzzy place to work, besides offering highly competitive salaries the company offers very attractive benefits including 100 percent health care coverage, domestic partner benefits, subsidized gym memberships, a job sharing program, opportunities for telecommuting, and ongoing professional training. This enables Microsoft to attract and retain the high quality knowledge workers who have made it so successful.Many successful businesses, such as Toyota and Honda, have turned often adversarial buyer-supplier relationships into close partnerships. As a result, suppliers develop and provide ideas that can speed a product from concept to market or reduce costs. To realize the benefits of a close relationship, buyers conduct joint improvement activities with suppliers, share information, develop suppliers' technical capabilities, and work to understand how suppliers manage their business. Just as important, smart buyers do not play suppliers off against each other in order to get the lowest price possible (Liker and Choi, 2004).In 1995, Alcan, a Canadian aluminum company, attempted to divert a river to create hydroelectric power for a giant smelter. However, the local indigenous population objected strongly. To continue its water diversion project, Alcan needed direct community buy-in. According to Dan Gagnier, senior vice president for corporate and external affairs:We won the environmental hearings and made the changes required us to get permits ... but the political environment had changed, so we actually lost.In the end, Alcan walked away from the project leaving over $500 million in stranded costs (Esty and Winston, 2006, p. 89). This experience vividly illustrates the importance of community engagement.Besides involving the community in major business decisions, many companies engage in corporate philanthropy. 
As Kenneth Lewis, CEO of Bank of America, writing on the company's website, said:We believe, very simply, it is the actions of individuals working together that build strong communities ... and that business has an obligation to support those actions in the communities it serves.In 2007 Bank of America donated over $200 million through grants that address important social, economic, and cultural needs of communities that they serve.The Committee Encouraging Corporate Philanthropy (CECP) is an international forum of business CEOs who focus on corporate philanthropy. CECP believes that when companies demonstrate programmatic effectiveness, fiscal accountability, and good stewardship in their philanthropic programs, society and business both stand to benefit. Members of the CECP include Adobe Systems, Allstate Insurance, Best Buy, Campbell Soup Company, The Dow Chemical Company, Eli Lilly & Company, General Electric, IBM, Johnson & Johnson, Merck, Nokia, Toys \"R\" Us, and Xerox, among others. Of course, these companies are not merely altruistic. Benefits that they expect to realize include: ability to attract and retain talented employees who want to work for a company that stands for something more than just the bottom line, opportunity to expose their products to more customers and markets, and the creation of positive customer attitudes.However, since a relationship between social responsibility and financial performance has not been demonstrated, some companies may be reluctant to make a strong commitment to CSR. Particularly in an economy such as the one that we face today, CSR requires a financial commitment with no effective way to assess whether it has a positive impact on the bottom line. In the face of tough economic conditions, many business leaders will adopt Milton Friedman's (1962, p. 133) position that \"there is one and only one social responsibility of business - to use its resources and engage in activities designed to increase its profits so long as it stays within the rules of the game, which is to say, engages in open and free competition, without deception or fraud.\" From Friedman's perspective shareholders are who should engage in philanthropic efforts based on their own personal social and ethical values.\nThe financial bottom line: The ultimate criterion of business success is the creation of shareholder value. Shareholder value is created when a business earns a return on invested capital that exceeds its cost of capital. Generally speaking, the greater the spread between return on capital and cost of capital, the greater the shareholder value that is created. However, the spread may be magnified by growth. In other words, a high-growth business that has a low to moderate spread may create more shareholder value than a low growth business that has a higher spread. Thus, executives and shareholders focus both on profitability and growth.\nIs there a sustainability sweet spot?: To answer this question, we selected the 31 NYSE traded corporations that appeared at least twice during the 2005-2008 time period on the \"Global 100 Most Sustainable Corporations in the World\" list that is compiled by Innovest Strategic Value Advisors, Inc. Innovest selects these firms based on social and environmental commitment, and on responsible strategic governance. We compared their three-year average return on assets, revenue growth rate, and share value appreciation to that of a carefully selected matched sample of competitors. 
For example, we compared Agilent Technologies to Teradyne as they compete in the measurement equipment business; Hewlett-Packard to IBM as they compete in the computer hardware, software, and consulting business; and Walt Disney to Time Warner as they compete in the media/entertainment business.We found that, on average, the companies drawn from the \"Global 100 Most Sustainable Corporations\" list outperformed their competitors. Specifically, the results found are shown in Table I.However, we also found that only about two-thirds of the sustainable corporations outperformed their competitors. This means that being environmentally and socially responsible does not guarantee superior performance. Why is it that some companies are able to find the \"sweet spot\" and others are not? This is the question to which we now turn our attention.\nWhat does it take to hit the sweet spot?: Inspirational, visionary leadership\n",
"role": "user"
},
{
"content": "Purpose: - Corporations and their leaders are coming under increasing pressure to achieve \"sustainability.\" Sustainable enterprises minimize harmful environmental impacts, are socially responsible, and create shareholder value.\nDesign/methodology/approach: - Using secondary data, we compared the financial performance of recognized sustainability leaders to that of a carefully selected matched set of competitors.\nFindings: - The authors find that, on average, sustainability leaders outperform their competitors. However, this is true and only two out of three cases.\nResearch limitations/implications: - While the authors utilized a relatively small sample with data that covers only a three-year period, the use of secondary data from two different databases reduces concerns about common method bias. The clear implication of this analysis is that a commitment to sustainability does not harm financial performance and may, in fact, enhance performance.\nPractical implications: - The authors conclude with a set of recommendations for how top management teams can hit the sustainability sweet spot.\nOriginality/value: - The concept of the sustainable enterprise is relatively new and little rigorous research has been conducted on its performance implications. The authors believe that the literature review, the research undertaken, and the practical implications will be of value to both scholars and to executive teams.\n",
"role": "assistant"
}
] |
Importance of innovation and flexibility in configuring supply network sustainability
|
[
"Innovation",
"Flexibility",
"Institutional theory",
"Collaborative relationships",
"Resource-based view",
"Sustainability in supply networks"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\n1. Introduction: Manufacturing firms playing a key role in the development of many emerging economics are, however, a subject of debate due to direct negative impact on climate changes. Research on sustainability has begun to emerge to address such environmental and social issues (Vithessonthi, 2009). Researchers have started putting special emphasis toward configuring upstream supply networks (Kim et al., 2015; Akhavan and Beckmann, 2017). The reason for such thrust is due to multiple supply management challenges which amplifies risks (Camuffo et al., 2006; Choi and Krause, 2006; Das et al., 2006). It is not easy for individual firm to operate profitably and sustain in this volatile, uncertain, chaotic and ambiguous (VUCA) environment due to changes in customer demands, changes in cost of raw material, changes in technology and increasing market competition (Sushil, 2017). This has changed focus of buying firms to formation of collaborative supply networks and work jointly for better results. A collaborative supply network is interlinked with multiple entities that are located globally and heterogeneous in nature which collaborate with the aim of achieving a common goal and supported through a systems network (Camarinha-Matos and Afsarmanesh, 2006).\n2. Literature review: Sustainable supply configuration aims to utilize and manage critical resources and minimize carbon emissions and overall logistics costs (Chaabane et al., 2011). Sustainable supply management involves supplier management activities such as supplier collaboration and supplier development (Akhavan and Beckmann, 2017). However, continuous education, training and monitoring related to environmental and social issues are required for managing suppliers sustainably in the supply network (Jabbour et al., 2017). Supplier development with an objective of solving environmental problems of focal company generally aims to enhance suppliers' environmental performance through sharing of knowledge and shared asset investments. Such activities enhance supplier capabilities in the supply network (Akhavan and Beckmann, 2017). Supplier development involves good communication and coordination which can happen through collaborative partnership among actors in the supply network. Such partnerships involve confidential information sharing, equal profit and risk sharing (Dubey, Gunasekaran and Ali, 2015).\n3. Propositions: Recently, some good articles have appeared in prestigious journals such as, e.g., Akhavan and Beckmann (2017), Ates et al. (2017), Brewer and Arnette (2017), Jia et al. (2017), and Pullman and Wikoff (2017) which helped authors in developing the conceptual model. Therefore, based on synthesis of literature review and discussion with experts from academics the conceptual model has been proposed for further empirical testing which demonstrates the SSN mechanism.\n4. Research methods: This section discusses the research instrument, sample and survey description, common method bias, non-response bias, model fit and quality indices and causality assessment.\n5. Data analysis and model assessment: 5.1 Structural equation model\n6. Conclusion: Research based on IT has shown that a firm's motivation to undertake sustainable practices is affected by pressures from regulators and the market (Arora and Cason, 1995). IT suggests that external forces motivate firms to undertake similar strategic actions (Scott, 2008). 
Under IT, firms are not only profit seeking entities but also recognize the importance of achieving social legitimacy (Suchman, 1995).\n",
"role": "user"
},
{
"content": "Purpose: The alignment of procurement strategy with overall business strategy is important for superior firm performance. It is also essential for firms' purchasing structure to fit the purchasing strategy whether it is cost reduction; value analysis and/or improving delivery and increasing flexibility for achieving superior purchasing performance. The purpose of this paper is to scientifically examine the influence of organization culture (OC), green supplier development (GSD), supplier relationship management, flexibility and innovation on sustainability in supply network (SSN) under the moderation effect of institutional pressures and resources availability.\nDesign/methodology/approach: The study is grounded by integrating institutional theory (IT) and resource-based view (RBV) theory. The study empirically tests the conceptual model through survey data gathered from 175 respondents in the KwaZulu-Natal province of Southern Africa using structural equation modeling which is a multivariate data analysis technique that permits to simultaneously examine many relationships among manifest and latent variables.\nFindings: It was found that OC plays a crucial role in shaping the workforce behavior and responsible for enhancing GSDs and building good relationship with suppliers which ultimately results into increased flexibility and innovativeness. Coercive pressures (CPs) play a moderating role between the causal relationship of innovation and SSN. Therefore, CPs have got the ability to amplify or reverse the effect between innovation and SSNs.\nResearch limitations/implications: The study suffers from few limitations. First, the study is restricted to one particular province in South Africa. Second, cross-sectional data were used for testing of model in the research study.\nPractical implications: The findings are of great importance for both the researchers and supply chain practitioners. Supply chain practitioners may use this knowledge for benchmarking the supply network configuration process and continuously improve the process. The tested model actually shows the mechanism for enhancing SSNs. In today's dynamic and uncertain business environment, all firms in the supply network aims to minimize risks and maximize profitability. The findings show that CPs play a moderating effect on the relationship between innovation and SSN. Whenever government authorities and institutions pressurize firms for minimizing pollution levels, we see a trend of increased innovations in the form of new product developments, usage of eco-friendly raw material and application of eco-friendly technology which ultimately minimizes the carbon footprint. Through such initiatives steel and engineering firms conform to the government norms and regulations. However, innovations are possible through collaboration with specialist suppliers and sub-suppliers by adopting a common standard across the supply network. Through this overall process of innovation, the focal firm and sub-suppliers linked in the network will benefit and ultimately achieve a competitive edge. Therefore, firms can experience multiple benefits in terms of lower development costs, lower project cycle completion time, improvement in design for re-manufacturability, low downtime, low supply risks, reduction of greenhouse gas emissions and, ultimately, sustainable development.\nOriginality/value: The study is unique from dual perspective - both in terms of scope and contribution to supply chain management literature. 
First, the study integrates flexibility and innovation to study the impact on SSNs. Second, it integrates both IT and RBV theory of firm to test the moderation effect and the findings further opens up window for future research.\n",
"role": "assistant"
}
] |
Shopping for kids' luxury brands: young mothers' identity quest in retail spaces
|
[
"Identity",
"Luxury",
"Semiotics",
"Pre-adolescent children",
"Retail spaces"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\nIntroduction: Since the early 2000s, the luxury market[1] has undergone dramatic changes. Under competitive pressure to expand internationally and conquer new markets, luxury brands have developed new retail environments through their flagship stores (Nobbs et al., 2012), whilst retaining their myth and tradition of craftsmanship (Kapferer and Bastien, 2009; Tungate, 2009). But in the near future, luxury brands and retailers need to conquer a new, more emotional generation of luxury consumers - the so-called \"Millennials\"[2] - who expect innovative, technology-driven retail experiences. Less loyal and predictable than the traditional baby-boomer luxury client base, they are also very demanding in their relationship to retailers. To prepare for this generational shift, the sector has initiated strategic change in brand management (Moore and Doyle, 2010; de Lassus and Anido Freire, 2014) to encourage multiple in-store purchases and further coordination between on- and off-line brand presence. Furthermore, inside the Millennials generation, women are considered as the key to growth in the luxury market[3], but their shopping behaviour as related to their life cycle still remains to be explored.\nLiterature review: The literature on luxury products has developed on the premise that this category is very specific.\nResearch methodology: Because of the exploratory nature of this research, we decided to conduct qualitative interviews with women in shops or in \"corners\" of department stores. We focussed on mothers and on the purchase of luxury clothing brands by these mothers for children up to age seven (McNeal, 1992). We interviewed 21 women: all of them were luxury goods customers. We concentrated on mothers as the main influencer within the families rather than fathers, because they purchase such items for their young daughters more frequently. Interviewees were questioned after being approached on leaving the stores.\nDiscussion: This exploratory research shows that in the case of luxury retail spaces for kids, women as young mothers play a determining role in the purchasing process. In the luxury retail context, the success of kids' brands is not only related to the fact that children are now affluent shoppers but also to the idea that women with young children have different motivations including identity-driven motivations for buying luxury products.\nConclusion: The aim of this study has been to provide an initial insight into mothers' motivations, but further research is certainly called for. Research questions to be further explored could include the role of fathers in this process. What is their position in terms of luxury retail shopping? How do they feel about these expeditions? How are the identity issues of mothers related to family-gendered roles? Another possibility would be to look at dyads (mother and adolescent) and their interplay: how does the adolescent feels about these shopping errands in luxury retail with his/her mother? What is his/her response to his/her mothers' identity motivations? This research will have to be conducted with older children (adolescents) who can articulate their thoughts than the ones recruited here. Another research could focus on changes in family structures and how these in might affect the way in which shopping for children's luxury brands is experienced. The present study was conducted in France on a homogeneous sample in terms of ethnocultural origins. 
Thus further research needs to be carried out on different ethnic groups with a view to examining whether cultural identity issues also play a role in luxury retail spaces. Another line of research would be to conduct international comparisons: are mothers' identity issues in relation to accessible luxury retail relevant in a non-French context? It should also be emphasised that, despite worldwide economic uncertainties, lines for minis will continue to develop. Chanel and Yves Saint Laurent have not yet succumbed to this trend. But Karl Lagerfeld has launched a line of clothes for children under the name Karl Lagerfeld Kids, with two collections a year, showing that this enthusiasm for minis is likely to remain strong in coming years, thus encouraging researchers to conduct further studies on this phenomenon.\n",
"role": "user"
},
{
"content": "Purpose: The purpose of this paper is to focus on mothers as key influencers in luxury retailing contexts.\nDesign/methodology/approach: Using a semiotic interpretation of mothers' discourses, the authors underline the identity motivations for purchasing luxury apparel for their pre-adolescent children.\nFindings: The paper shows that when shopping for luxury brands for their pre-adolescent children, mothers manage discrepancies between their \"real\" and \"idealised\" selves as well as the pushes and pulls of being a mother and a woman.\nResearch limitations/implications: The findings point to possible future research on this topic, particularly with regard to investigating how luxury stores and retailers can adapt so as to satisfy mothers' identity quest.\nPractical implications: Managers of luxury brand retail spaces looking at the future of retailing could analyse their store environment in the light of these mothers' identity-related motivations. As well as focussing on how children look, store layout and merchandising should provide different spaces for mothers' identity expression, using new in-store digital technologies.\nOriginality/value: This study is one of the first to analyse luxury shopping for children taking the point of view of mothers. The paper underlines how young mothers build their new maternal identity and their projected relationship with their child through purchases of children's luxury goods in specific retail environments.\n",
"role": "assistant"
}
] |
Consequences of KPIs and performance management in higher education
|
[
"Performance",
"Metrics",
"Faculty",
"Appraisal",
"Measures",
"Stress"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\nIntroduction: Universities were founded on learning and discovery. The work of academics includes teaching, research, innovation, engagement with professions, service to the community and governance (Coates and Goedgebuure, 2012). Nearly 30 years ago, Gmelch et al. (1986) described the \"plethora of roles of an academic, including teacher, adviser, researcher, university citizen, and departmental colleague and mentor\" as stressful. In Australia, the stress associated with academic work appears to be escalating and academics have a constant daily battle juggling teaching, research and the growing demands of an increasingly managerialist higher education sector (Bexley et al., 2011; Field, 2015). The higher education sector seems obsessed with surveillance, measurement and documentation, driving academics to meet targets and performance indicators (Chan, 2015; Field, 2015; Jones et al., 2012).\nKPIs: KPIs are used to gauge organisational performance in relation to strategic and operational goals and are quantifiable measures that reflect factors critical to the success of a university. In the higher education sector, strategic and operational performance indicators are determined by senior management in consultation with financial organisations such as Deloitte and Touche, Ernst and Young, and Price Waterhouse Coopers, for example. KPIs play an integral role in performance and assumed elevated importance after the introduction of performance-based funding in higher education. Performance-based funding is a system that is used by some governments to allocate resources to public universities. Introduced in the UK, performance-based funding had been adopted by eleven countries by 2010 (Organisation for Economic Cooperation and Development (OECD), 2010a) despite some concerns. The funding models use peer review, bibliometrics, performance data and quantitative indicators; the latter include student satisfaction surveys, retention and completion rates, graduates' success, the funding that academics are able to attract, and the research outputs that they generate. As part of performance-based funding, Excellence in Research Australia (ERA) was introduced in 2010 in Australia to \"evaluate the quality of the research undertaken in Australian universities against national and international benchmarks\" (Organisation for Economic Cooperation and Development, 2010b). Indeed, it is government policy to drive Australia's research competitiveness and universities are integral to the research effort (Universities Australia, 2014, University research).\nPerformance management: Performance management is a formal, structured system for measuring, evaluating and influencing an employee's productivity and originated in the 1970s (Field, 2015). Performance management was forced onto academics in many countries during the 1980s even though the \"problem\" that it was meant to \"manage\" has never been identified (Field, 2015). It is prudent to acknowledge that \"performance management did not make the world's great universities what they are today\" (Coates, 2014).\nKPIs and teaching: Student evaluation of teaching (SET) has become a tool to collect satisfaction data about teaching although student evaluations were originally designed to gather feedback that was used to inform and improve teaching. In the current environment, SET is used to generate data which are used to calculate scores; these are used as surrogate KPIs of teaching. 
There are flaws and biases associated with SET surveys for this purpose (Patrick, 2011), as SET ratings may not be valid and the sample sizes are often not representative (Shevlin et al., 2010). Importantly, the SET scores do not reflect learning or the quality and effectiveness of teaching (Patrick, 2011). SET is compulsory in Australian universities and Australian academics are held accountable for teaching through their SET scores (Cretchley et al., 2014) despite the concerns described above. Interestingly, an American study proved that SET scores were significantly correlated with burnout among academics (Lackritz, 2004). SET is also associated with survey fatigue among students which further comprises the validity of the \"findings\" (Shevlin et al., 2010; Patrick, 2011). Recent anecdotal evidence suggests that some universities even withhold grades until students have completed online SET surveys; this goes against principles of ethical research which is a fundamental requirement in higher education.\nKPIs and research: Research productivity is a major determinant of the performance of academics and a core indicator of university rankings, nationally and internationally (Jung, 2012). A variety of measures was spawned by the h-index which is a measure of scientific value; it has resulted in citation databases which generate metrics. A metric depends on the database from which it is generated. ResearcherID is a metric derived from publications indexed in the Web of Science database. Based on positivism that underpins biomedical research, the metric is not suitable for academics who publish peer-reviewed research in journals which are not indexed in Web of Science. These academics had to trawl though journals, impact factors and citations in order to justify their research while their colleagues who published in journals indexed in Web of Science only had to click a mouse-button. Citation databases such as SCOPUS, ORCID and Google Scholar - the latter is publicly available - fuel the growing demand for quantitative measures.\nOther stressful demands in academia: Pastoral care and duty of care\nConsequences of performance management and KPIs: KPIs fuel competiveness and allow benchmarking, increasing the pressure for universities to compete, nationally and internationally. It is pertinent to remember that the core business of universities is learning (Coates and Goedgebuure, 2012) which is facilitated by the academic profession. Learning occurs among students and among faculty, and to facilitate learning academics use creative, reflective and critical thinking which require deep thought processes. These can only occur when the limbic system of the brain which is associated with emotions and memory, is not unduly aroused and negatively affected by demands and perceived threats. An individual's perception of a demand and his/her ability to meet that demand results in a response to perceived stress (Gmelch et al., 1986). Mild stress evokes a physiological threat response which can enhance performance; however, the \"fight or flight\" response to stress, which is triggered by the sympathetic branch of the autonomic nervous system, impairs creative and reflective thinking. Prolonged stress causes sustained increases in the levels of the sympathetic neurotransmitter adrenaline and the corticosteroid endocrine hormone, cortisol. 
The highest density of receptors for this hormone is found in the hippocampus, a region of the limbic system which is significantly associated with memory and learning (Lupien and Lepage, 2001). Stressful work, therefore, negatively impacts on cognitive ability; prolonged stress leads to burnout, a state of mental and emotional exhaustion (Lackritz, 2004).\nConclusion: This conceptual paper highlights the multifactorial nature of stresses for academics in higher education and is hypothesis-generating. Various aspects discussed within this paper are underpinned by financial drivers and in the current climate of corporatisation it will be challenging for university leaders to find a balance which permits the generating of revenue in terms of research outputs and student numbers, and the cognitive \"critical thinking-space\" that the work of university academics requires.\n",
"role": "user"
},
{
"content": "Purpose: The core business of universities is learning. Cognitive thinking is critical for learning and the development of new knowledge which are essential in higher education. Creative, reflective and critical thinking are negatively affected by unrealistic demands and stress. The purpose of this paper is to argue that key performance indicators (KPIs) and performance management are detrimental in the higher education sector, as they cause undue stress which impacts negatively on an essential criterion of academia, cognitive thinking.\nDesign/methodology/approach: To explore this issue, the authors discuss the impact of stressful demands in the context of Australian higher education. The paper draws on literature that describes managerialism and on neuroscientific evidence to develop a hypothesis that supports a more holistic approach to human resources management of academics.\nFindings: Performance management and measures (including KPIs) add to the complex demands of academic work despite a lack of evidence that they are appropriate in the higher education sector.\nOriginality/value: Performance management systems and KPIs undermine creative, reflective and critical thinking. Principles governing education should supersede the ever-growing emphasis that is being placed on quantitative measures and bureaucratic demands in higher education.\n",
"role": "assistant"
}
] |
Perceived images and vocations of local territorial systems: implications for place branding strategies
|
[
"Customer experience",
"Place branding",
"Big Five Model",
"Inductional marketing strategies",
"Local territorial systems",
"Personality traits",
"Place image",
"Place marketing"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\n__NO_TITLE__: An executive summary for managers and executive readers can be found at the end of this issue.\n1. Introduction: Due to the high mobility of capitals and workforce, as well as the rapid circulation of knowledge and information that characterize recent decades, the competition among places to attract human resources, capitals and tourist flows has sharply increased (Avraham and Daugherty, 2009). Consequently, place marketers have developed extensive promotional strategies aimed at raising the attractiveness of their places. In many cases, however, such strategies have failed to achieve their goals. Because marketing efforts tend to leverage common assets (e.g. natural beauties, cultural heritage, etc.), the main categories of place stakeholders - i.e. residents, tourists, local firms and export markets (external firms or customers) (Kotler et al., 2002; Zenker, 2011) - often perceive the promoted places as similar and highly substitutable (Colomb and Kalandides, 2010). In light of this situation, place marketers are currently seeking to devise communication strategies that highlight the unique characters of places and, in doing so, imbue said places with favorable images (Kotler et al., 2002; Usakli and Baloglu, 2011) and appealing personalities (Ekinci and Hosany, 2006). In doing so, such strategies must also consider the goals that diverse categories of users aim to achieve with their \"fruition choices\", i.e. their choice to utilize the products, services and resources of a place. Such goals change from one category of users to another. For instance, residents typically choose the products/services of a given place with the goal of satisfying the need to live and work in a secure and healthy environment. Tourists choose to visit a certain place to satisfy their need for enjoyment. Local firms choose to invest in a given place to take advantage of the material/immaterial resources available there. Export markets choose to establish commercial relationships with a given place to benefit from its products/services (cf. Kotler et al., 2002; Teller et al., 2010).\n2. Perceived images and production orientations of places: Place image is the sum of the beliefs, ideals and impressions that users have toward a place (Kotler et al., 2002). It synthesizes the \"objective knowledge, impressions, prejudice, imaginations, and emotional thoughts an individual or group might have of a particular place\" (Jenkins, 1999, p. 1). It is therefore understood as a mental picture of the tangible and intangible characteristics of a place. Such a mental picture stems from an overall evaluation of the place's physical characteristics as well as users' feelings about the place (Pike, 2002; San Martin and Rodriguez del Bosque, 2008). Seminal research distinguishes the so-called \"organic image\", i.e. the totality of what a person already knows or perceives about the place, from newspaper, radio and TV news, documentaries, periodicals, dramas, novels and non-fictional books and classes on geography and history (Gunn, 1997, p. 37) from the so-called \"induced image\", arising from the exposure to advertisements, travel brochures guidebooks or other promotional sources of information, and the so-called \"complex image\", resulting from the actual experience of interacting with a place (Hankinson, 2004). The literature also points out that place image has cognitive, affective and conative dimensions (cf. 
Del Bosque and San Martin, 2008). The cognitive dimension is represented by users' beliefs about the characteristics of a place; the affective dimension refers to the emotions that users feel for a place; and the conative dimension is based on the intention to visit, live, work or run a business in a certain place (cf. Alcaniz et al., 2009). According to Bigne et al. (2001), the cognitive dimension encompasses both functional and psychological elements: the former refer to users' perceptions of the amenities, prices, services, etc. available in a place; the latter refer to users' perceptions of intangible characteristics such as the hospitality of residents, the security and reputation of the place and so on.\n3. Perceived place personality: its meaning and measurement: Place personality is a \"metaphor\" of the distinctive features of a place and a relevant dimension of a place's brand (Ekinci and Hosany, 2006). Kaplan et al. (2010) define it as a \"combination of utilitarian, symbolic and experiential attributes\" (Kaplan et al., 2010, p. 1293) that leads users to associate unique features with places. Such an association facilitates the creation of a unique brand personality, which helps differentiate a place from its competitors and establish a sense of identification between a place and its users (Sirgy and Su, 2000; see also Kaplan et al., 2010; Maehle and Rotem, 2009). This means that brand personality increases the distinctiveness of a place and fosters the establishment of affective bonds with a place (Hankinson and Cowking, 1993; Usakli and Baloglu, 2011). Importantly, brand personality may considerably impact users' decision to live in a certain place, visit it, use its resources and/or purchase its products (i.e. their fruition choices). This is especially likely to happen when users perceive a place's brand personality as a reflection of their own personalities (Guido et al., 2013; Schiffman and Kanuk, 2004; Sirgy, 1982). Indeed, by choosing to use the peculiar products, services and resources of a place, users have the opportunity to express the actual or ideal dimensions of the self (cf. Belk, 1988; Sirgy, 1982).\n4. Inductional marketing strategies: Inductional marketing strategies seek to increase customers' involvement with a firm's offers to trigger their purchasing intention (Guido, 2013). In the field of place marketing, inductional marketing strategies can be used to influence users' fruition choices. Through such strategies, place marketers may attempt to induce the diverse categories of actual or potential users of a place - i.e. residents, tourists, local firms and export markets (Kotler et al., 2002) - to experience a place (\"interact\" with it) by using its products/services/resources and hence contribute, through their actions, to the place's development (Maheshwari et al., 2011). The strategies under discussion, therefore, aim at motivating users to live in a given place (in the case of residents), visit it (in the case of tourists), take advantage of its resources (in the case of local entrepreneurs) and purchase its products/services (in the case of export markets). To this end, these strategies seek to affect the cognitive, motivational and perceptual determinants of users' satisfaction, namely, the expectations users have about a place's offerings, the desires (i.e. inner goals) users wish to fulfill by utilizing such products/services/resources and the evaluations users apply to their experiences of interacting with such a place.\n5. 
Research objectives: The present study was aimed at:\n6. Method: To reach the above-illustrated goals, we conducted empirical research in the four LTSs of the \"Province of Lecce\", a Southern Italian sub-regional area. LTSs represent the units of analysis for place marketing strategies. They are identified not by geographical or administrative boundaries, but on the basis of the social, economic and spatial similarities shared by the elementary territorial units of a local area, i.e. cities, towns or other communities. Thus, an LTS groups together basic territorial units characterized by analogous sociodemographic characteristics (e.g. language, ethnicity, educational attainment, etc.), economic characteristics (e.g. production specialization, employment rate, etc.) and morphological and physical characteristics (e.g. climate, transport infrastructures, urbanization rate, etc.). LTSs can be characterized by a prevalent vocation, which, as mentioned above, may be stereotypically a tourist vocation or an industrial vocation. LTSs characterized by a prevalent tourist vocation compete with similar LTSs for firms involved in the tourist sector and tourist flows. Likewise, LTSs characterized by a prevalent industrial vocation generally compete with similar LTSs for industrial firms and labor flows. Furthermore, all LTSs compete with each other to attract capital flows and obtain financial support from central administrative authorities.\n7. Results: 7.1 Assessment of the perceived tourist/industrial image of the examined LTSs\n8. Discussion and implications: This study addressed three different, but related areas of research in the field of place marketing:\n9. Limitations and future research: As regards the study limitations, it should be acknowledged that the obtained results have limited generalizability, as they reflect the peculiar characteristics of the examined LTSs. As a consequence, our conclusions about the factors that shape perceived tourist image (specifically, accessibility, the presence of attractions and the hospitality of the local population) and perceived industrial image (availability of production infrastructures and capacity to attract inward/outward investments) may not be wholly applicable to LTSs situated outside the examined province. Likewise, the findings related to perceived personality and to the associations between this construct and perceived tourist/industrial image cannot be generalized. Hence, to shed more light on the examined constructs, future studies should investigate a broader set of LTSs, possibly located in strikingly different geographical settings. Finally, as regards this research's third area of interest - namely, the factors able to favorably affect users' fruition choices - future studies could investigate them on a broader scale and also assess the extent to which such factors are capable of influencing users' choices. Such studies could aim at establishing which of the factors under discussion can be considered valid predictors of users' fruition choices.\n",
"role": "user"
},
{
"content": "Purpose: This paper aims to assess the extent to which the perceived images and personalities of places mirror their prevalent production orientations, or \"vocations\". It also investigates the factors that shape the expectations and desires of residents, tourists, local firms and export markets, as well as these users' overall place experience.\nDesign/methodology/approach: In all, 600 questionnaires containing both closed-ended and open-ended questions were administered in four local territorial systems (LTSs) of a Southern Italian province. Data were analyzed by using both quantitative and qualitative techniques.\nFindings: Results showed that place image mirrors the respective productive orientation for only one of the examined LTSs. Meanwhile, for all four LTSs, place image was congruent with place personality.\nResearch limitations/implications: The generalizability of the results is limited, as the research focused on LTSs located in a specific geographical area.\nPractical implications: The paper provides suggestions regarding the formulation of marketing policies aimed at improving the willingness of residents, tourists, local firms and export markets to use the products/services/resources of the studied LTSs. Communication and branding strategies that leverage the personality traits of the examined LTSs are also illustrated in the paper.\nOriginality/value: By examining the coherence among place image, personality, and prevalent vocation, this research addresses a neglected area of investigation. This study is one of the few that provides empirical evidence of misperceptions of the actual production orientation of places.\n",
"role": "assistant"
}
] |
Improving education delivery through community-school partnership: Is the "social contract" being weakened? - A study of two rural schools
|
[
"Decentralization",
"Education policy",
"Education",
"Community participation",
"School–community relations"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\nIntroduction: For the past few decades, international commentary on the priorities of nation building has centred on the role of education in acquiring the knowledge required for development and economic competitiveness (Black, 2003). There has also been a resurgent interest in the notions of community participation, social cohesiveness and civic engagement. Many national vision statements present a charter for progress and prosperity based on quality education and community participation. A growing belief that these goals cannot be wholly met through top-down public policy has led to an international interest in new boundary-crossing approaches that bring together the skills, resources and experience of the public sector, private philanthropic organisations and the community (Black, 2003; Latham, 2001).\nExamining the case for community participation in education: The theoretical basis of this paper is partly drawn from Epstein's (1995) overlapping spheres of influence and the Hoover-Dempsey and Sandler's (1997) theory of parental involvement in education. Epstein's (1995) theory of overlapping spheres of influence identifies schools, families and communities as the major institutions that socialise and educate children. A central principle of this theory is that certain goals, such as academic success, are of interest to each of these institutions and are best achieved through co-operative action and support (Epstein et al., 2002). Epstein's model locates the pupil at the centre of this construct.\nStudy context and method: The topic of this study is essentially contextual in the sense that the researched communities have specifically defined profiles - they are fishing, farming and trading communities - that are key to the manner in which school-community partnerships are constructed and the meanings attached to them. With respondents of different perspectives, backgrounds and experiences, using a case study design is most appropriate as this enables the realisation of context-specific insights and reveals the ways they shape the interactions I sought to explore.\nResults and discussions: Community-school partnership - a \"social contract\"\nConclusion: There is a need for policy recognition that at the heart of effective partnerships is good school-community partnerships. Working on improving this relationship will bring more and more the school and the community together and by working together and learning together, social capital is created and used, resulting in increased individual and community capacity. This supports the view that the development and sustainability of community-school partnerships represents a key strategy for rural community development.\n",
"role": "user"
},
{
"content": "Purpose: One of the most debated policy discourses that have engaged the attention of countries as well as the donor community is the varied pathways in improving the delivery of education through decentralisation and community participation. A key policy expectation is that through the active participation of the community, education quality and related outcomes would improve. The purpose of this paper is to explore the policy and practices of community and school partnership and the extent to which the \"social contract\" between communities and schools has been executed. It also explores the challenges they face as they engage in the governance of schools and how such engagement shapes education delivery.\nDesign/methodology/approach: This research was guided by a semi-structured interview schedule focused on two selected basic (i.e. primary and junior secondary) schools and their communities within a Municipality in a Coastal Region of Ghana as a single case for the study. Data were read thoroughly to identify common themes which included the multiple perspectives of participation, teacher management, conflicts and tensions, the role of community elites in school-community partnerships, capacity constraints, parental roles, and issues about the \"social contract\" among others.\nFindings: Drawing on case study data, the paper argues that although decentralisation policies aim at strengthening local democracy and participation, they do not fully consider the conditions under which this might be achieved. Furthermore, in community-school partnership discourses, the impression has often times been given that the policy of education decentralisation is about what communities could do to support schools located within them. The fact of it being a two-way relationship is often not stressed, thus, diminishing the role the school plays or could play in the life of communities.\nOriginality/value: The study reinforced the point that the relationship was a two-way one based on reciprocity, and that it was the fulfilment of the expectations of both parties that shaped the relationship between them and determined the nature of communities' participation in the governance of its schools. Anything to the contrary thus weakens the relationship.\n",
"role": "assistant"
}
] |
Enterprise skills for the economy
|
[
"Entrepreneurialism",
"Education",
"Enterprise economics",
"Skills"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\nIntroduction: Increasingly, the drivers of growth in modern economies are not large industrial companies, but rather, new and small businesses (Drucker, 1985; Coulson-Thomas, 1999; Keogh, 2002). Fundamentally important in this shift of economic focus are the concepts of innovation and flexibility. Drucker (1985) compares the emergence of the entrepreneurial economy of today with that observed by Schumpeter (1947) of the industrial capitalist economy. Both are based essentially on innovation and the creation and exploitation of invention. Unlike the traditional capitalist economies, however, in the modern sense this comprises knowledge and an attitude shift from the dependence on a technological status quo to the pursuit of new opportunities to improve it, or discover new and wider application of it (Keogh, 2002). It is this that is identified by Keats and Abercrombie (1991) as the engine that drives the economy towards its new state in most nations.Developing a culture to operate within this new enterprise economy is, however, more complex than increasing the number of innovative businesses started. It encompasses all forms of enterprise, including entrepreneurship within organizations (e.g., intrapreneurship, public sector entrepreneurship), as well as the creation and growth of smaller firms, in order that opportunities and innovation are realized to economic advantage. Essentially, within an enterprise economy, increasing demand is made for new and exciting products and services of novelty, quality and perceived consumer or societal value. As Drucker (1994, p. ix) notes, \"in such a period of rapid change the best - perhaps the only - way a business can hope to prosper, if not to survive, is to innovate\".Enterprise skills are a necessary and fundamental requirement of an enterprise economy. To this end, skills necessary for entrepreneurial actions and behaviours must be developed and encouraged in order that the UK can maintain competitiveness within the global enterprise culture. Government appropriately acknowledges this: Prime Minister Tony Blair (2000) states that: \"creating an enterprise culture is vital. It is business and entrepreneurs that create wealth and the young people of today are the entrepreneurs of tomorrow\". Indeed, UK governments have for some time championed the concept of an enterprise culture. For example, the Department of Trade and Industry (1998) states that:...entrepreneurship and innovation are central to the creative process in the economy and to promoting growth, increasing productivity and creating jobs - the UK needs more risk takers who can rapidly turn ideas into products and businesses.\nEnterprise skills for the enterprise economy: UK government policy is also increasingly focusing on assisting the development of enterprise skills sets among the working population, suggesting that, in particular, \"entrepreneurial skills and motivation are central to the dynamism of a commercial society\" (Gavron et al., 1998). The current paper considers the concept of equipping students at university with \"enterprise skills\" and looks at the importance of these in the current and developing economic environment.Young people are currently in a unique position. Increasingly, they can no longer expect to find the traditional \"job for life\" as was arguably the case with the post-war generations. 
Instead, it is incumbent upon them to take more personal responsibility for employment and financial affairs. There is development towards \"portfolio careers\": contract employment, freelancing, periods of self-employment, etc. (Dearing, 1997). For educators the implications are obvious. Flexibility and creativity have become necessary skills, and educators therefore have an obligation to meet students' expectations with regard to preparation for the economy in which they will operate. To this end, much has been done in education in the UK to develop skills and competencies for, and attitudes to, enterprise. The higher education (HE) sector obviously has an important role to play in the enterprise economy, as universities \"provide access to a spectrum of knowledge-based resources to support the development of the technologically sophisticated enterprises that are needed to compete in the international market place\" (McMullan and Long, 1987). Indeed, an increasing number of universities now provide access for students to enterprise education, either via standalone modules, or as part of students' core course of study. Young (1997) notes that students may wish to study entrepreneurship for one or both of two reasons nowadays: to enable them to start their own business, or to acquire the knowledge and skills helpful for their career. Gibb (1996) criticised the (then) typical methods of teaching entrepreneurship in Business Schools in the UK for not equipping graduates with the skills to survive in the modern world. He argued that conventional classroom approaches stressed theory and conceptual thinking, and tended to be teacher-led with the emphasis on understanding and critical analysis. Similarly, Rae (1999) claims, \"the skills traditionally taught in business schools are essential but not sufficient to make a successful entrepreneur\". The debate about how to teach enterprise for effective application within the new economy is ongoing (Gorman et al., 1997; Young, 1997). The literature is, however, generally revealing. For example, Hynes (1996) identifies that there is a need to teach students how to identify opportunities in the market place and prepare and equip them with the knowledge and skills necessary to capitalize on and manage these opportunities. Kirby (1992) suggests that the challenge is to develop courses and methodologies for teaching which, \"develops in the student, the skills, attributes and behaviours characteristic of the enterprising individual\". To summarise general trends in the literature, commentators tend to agree that the practice of enterprise education is one that requires a blend of knowledge, skills and attitudes, which recognises the synergistic links between management theory and entrepreneurial practice (for example, Weinrauch, 1994; Gorman et al., 1997; Bechard and Toulouse, 1998). Skills necessary include negotiation, salesmanship, leadership, creative thinking, management of technological innovation, new product marketing (McMullan and Long, 1987), confidence, skills in competitive strategy, identifying market opportunities (Simmons and Mason, 1999), financial acumen, organization skills and problem-solving abilities. To these ends much development of enterprise education programmes has occurred.\nThe study: In Scotland, entrepreneurship education at university is relatively well developed. 
Several universities have had dedicated entrepreneurship units since at least 1995, and entrepreneurship education was given increased focus upon the inclusion of all Scottish HEIs in the Scottish Institute for Enterprise (SIE) in 2002. The SIE is funded by the UK Government's Office of Science and Technology, and comprises a collaboration between all Scottish universities, with a remit to develop entrepreneurship education amongst science and technology students and to increase commercial activity. The aims of the current paper are to test, using data from four of these Scottish universities, the extent to which those who study entrepreneurship believe that it has helped them to develop skills for the enterprise economy, and investigate any differences that may prevail between those attending different faculties. Further comparisons between populations of students in the sample are possible, but will not be considered here as they relate more appropriately to other papers. The current study forms part of a wider research project, which seeks to investigate the impact of enterprise education in HE. Based on theories of planned behaviour which support the idea that ambition can act as a useful predictor of potential (Ajzen, 1991; Autio et al., 1997), previous analysis has suggested that enterprise education will have a positive effect on the number of graduate start-ups in the future, but that this is a long-term, rather than a short-term potential (Galloway and Brown, 2002). In the short term, however, there is evidence that enterprise education is regarded as being of benefit to students when they enter the workforce, regardless of type of employment, i.e. the skills and competencies learnt will be applied within the non-self-employment/business start-up contexts (Brown et al., 2003). From the literature, as well as results from the wider survey of entrepreneurship students, several research questions are raised, and using data from the survey, these questions can be tested. These questions are:\n1. Does the data support the findings in Galloway and Brown (2002) that self-employment or business ownership, while included as intentions of students after graduation, are long-term, rather than short-term, aspirations?\n2. What types of employment are students expecting to enter in the short term?\n3. What are students' perceptions about the skills required for entrepreneurship?\n4. To what extent do students perceive that these skills are developed or improved by enterprise education?\n5. Will entrepreneurship education benefit new graduates in non-self-employment/business start-up occupations?\nMethodology: The ongoing study of the impact of entrepreneurship education in HE, from which the current paper draws, comprises a collaboration between four Scottish universities: Heriot-Watt University, Napier University, the University of Paisley, and the University of Strathclyde. Upon completion of an entrepreneurship module students are asked to complete a questionnaire designed to elicit information about students' perceptions of skills development, and ambitions and expectations for their careers. The use of a questionnaire is considered most appropriate in order that consistency can be maintained throughout the four university user groups, thus lending robustness to quantitative analysis. 
However, the questionnaire-based method is, arguably, not optimal in terms of obtaining information about attitudes and perceptions, as formality and structure inherent in this method are less conducive to establishing this more tacit information than, say, focus groups. As the research is quantitative, other methods were ruled out from an administration point of view, and the drawbacks of the use of questionnaires are limited by the fact that they are administered during the final entrepreneurship class, and by the lecturer, who has established rapport with students throughout the module period. Additionally, questions to which a prose-style answer is required are included, in order to allow for \"free\" answers from which information of a more qualitative (in many cases supplementary or corroborative) nature can be obtained. Built into the instrument is a device that will allow for tracking of students once they have left university in order to facilitate, in the long term, comparison between students' aspirations and graduates' actions. For the current paper, a sample was extracted from the longitudinal research. This sample comprised 519 student respondents who had completed an entrepreneurship module during the academic year 2002-2003. As the current study concerns skills perception and development specifically, it was considered important to draw from as homogeneous a group as possible in terms of skills and competencies taught. For this reason, only those students who had completed a module that sought to teach the processes of entrepreneurship via the business start-up model, and involved the creation of a business plan or report were included. Once other module types had been excluded, a final sample of 451 student respondents was examined.\nResults: Are business ownership or self-employment longer-term aims as suggested by results in Galloway and Brown (2002)?\nDiscussion and conclusions: Results from the current study can be interpreted as generally positive. The data supports the contention that while one of the aims of entrepreneurship education in HE is to increase the rates of graduate self-employment and business ownership, this is most likely to be a long-term rather than a short-term potential. The study finds that students of entrepreneurship expect to enter private sector employment in the short term, especially large private sector employment. This seems to be the case particularly for engineering students, and this may be due to the nature of the engineering industry, along with the lack of opportunity to start new small firms in engineering particularly, due to high start-up capital costs and pre-registration obligations (see Galloway and Brown, 2002, for a full discussion of this). However, evidence from the current study also suggests that students are increasingly aware of the development of the economy in terms of it becoming less dominated by large firms, as a substantial proportion of all students claim to expect to enter new firms and small firms in the short term. Students appear to have awareness of the skills required for entrepreneurship and a large proportion believes that they are equipped with these skills. 
Indeed, around half the sample of students perceive that entrepreneurship education has improved their entrepreneurial skills, and the greater proportion of students believe that skills learnt and developed by entrepreneurship education are applicable and important to non-self-employment or business ownership occupations. These results, while not definitive, nevertheless can be regarded as encouraging to those involved in the development and delivery of entrepreneurship education in HE. Entrepreneurship education would appear to have much potential to develop skills and competencies appropriate for the enterprise economy, and the current paper supports generally the idea that these skills and competencies are understood as important and are developed for students in order that they can be applied to economic advantage within post-university employment.\n",
"role": "user"
},
{
"content": "Purpose: - In response to the emergence of an enterprise economy, government claims that building an enterprise culture is vital. Correspondingly, provision of entrepreneurship education in higher education has expanded. The paper aims to assess the potential of entrepreneurship education to develop skills, and of whether students perceive them as having value within the modern economy.\nDesign/methodology/approach: - The paper draws from a longitudinal, collaborative study of students of entrepreneurship in four universities. Using a questionnaire-based methodology, the paper is based on responses from a sample of 519 students.\nFindings: - Results include that any increase in graduate entrepreneurship is most likely to be a long-term. Results also suggest that many students expect to work in new and small firms, and that skills developed by entrepreneurship education are applicable to both waged employment and entrepreneurship. Accordingly, entrepreneurship education seems to have much potential to develop skills appropriate for the enterprise economy.\nResearch limitations/implications: - The research is limited by its quantitative nature. As the primary purpose is to evaluate attitudes to entrepreneurship and perceptions of the economic environment, further research should involve qualitative follow-up, in the form of focus groups and/or longitudinal case studies.\nOriginality/value: - The value of the paper lies in the suggestion that investment in entrepreneurship education is likely to have a positive impact within the economy. The long-term impact of an increase in awareness of entrepreneurship; of the ability to start firms; and an increase in skills transferable to waged employment within an enterprise-based economy, can not be underestimated.\n",
"role": "assistant"
}
] |
A cartographic analysis of the correlation between document supply and citations in pharmacology: a case study from INIST in France (part 2)
|
[
"Document delivery",
"Pharmacology",
"Citations",
"Serials",
"Electronic media",
"Case studies"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\n1. Introduction and issues: This paper is the follow-up of our study of the trends in document supply requests from print pharmaceutical journals in a digital environment. In our first article (Boukacem-Zeghmouri et al., 2006), we studied a sample of pharmaceutical serials supplied by INIST-CNRS and observed an overall downward trend in print document supply by INIST beginning in 1999. Four groups of serial titles could be distinguished each with a different evolution of document supply requests.In our second article, we will focus on the link between document supply requests recorded at INIST for the serials sample[1] and the journal citations in the sample taking into account the digital environment and the online use of these journals. The objective of this paper is to confirm the hypothesis that document supply requests for print documents no longer correspond to the realities of the research activities of researchers and therefore can no longer be an index for journal citation (see Bador et al., 2003). Nowadays, it would seem that journals are essentially read in their digital version.Then we will show the break in the link between document supply and citations in the context of electronic journals. The originality of our work resides in showing this break through empirical data.\n2. The sample: We used the same serials sample as in our first paper, i.e. 95 journals listed under the \"Pharmacology and Pharmacy\" heading in Journal Citation Reports (JCR) published by the Institute for Scientific Information (ISI). We added one year to the initial study period of 12 years to cover 1992 to 2004. During this period, the number of journals went from 136 titles in 1992 to 187 titles in 2004 (see Table I).Some journals were excluded from the sample because they were either included in JCR only after 1992 or disappeared from JCR before 2004. We further analysed the following three periods:1. 1993-1994:* New serial titles in JCR. A total of 17 including five titles launched between 1986 and 1989, 11 titles launched after 1990 (five following title merger or change) and one title launched in 1973.* Serial titles dropped from JCR. Four discontinued titles, including three title mergers/changes.1. 1997-1999:* New serial titles in JCR. A total of 39 including 23 titles launched after 1990 (12 title mergers/changes), 12 titles launched after 1995 (seven title mergers/changes).* Serial titles dropped from JCR. Eight discontinued titles, including six title mergers/changes.1. 2002. New serial titles in JCR. Five titles launched in 1986, 1994 and 2001 including two title mergers/changes.These three periods representing six out of the 13 years of our study, show that a large number of serials (79 titles) were excluded from the sample because they either disappeared or appeared during the study period. A total of 36 titles out of 79 were either newly launched or discontinued; others appeared in or disappeared from JCR for reasons that only ISI could explain.Therefore, our sample finally comprises 89 serial titles representing 54 per cent of the average number of serial titles (166) for the 13-year period covered by our study.\n3. Matching citations and document supply: In our first article, we processed raw data. In this study, we proceeded otherwise. 
First, we transformed the citation and document supply values into standard scores (z-scores)[2], a common statistical tool when comparing two distributions with different value scales. The mean of the standardised document supply and citation values is 0, their standard deviation is 1. In this way, we obtained for each year a \"dot cloud\" of journals centred at 0.0 and divided into four groups or \"quarters\". Figure 1 shows the 1999 map. Each dot on the map corresponds to a specific journal, characterised by its standard scores for document supply and citations. To illustrate the result of the mapping, some journal titles are included in the map. Some explanation may be necessary to understand the map and its reduced central values (dots). For instance, in 1999 INIST customers ordered 832 copies from The Journal of Pharmacology and Experimental Therapeutics, and it has been cited by 32,134 articles (ISI). In 1999, the average number of copy orders for the whole sample is 308.71 per journal (INIST), and the average number of citations is 4,235.11 (ISI). We then calculated the standard deviation, the most important measure of dispersion in statistics, which quantifies how individual values are distributed around the average. The standard deviations of the two distributions in 1999 are 239.54 (copy orders) and 5,982.99 (citations). Both of the journal's values are thus highly dispersed (distant) from the average values. The Z or standard scores (reduced central values, see Carpenter and Vasu, 1978) are directly derived from the standard deviation:\n* Axis x (citations 1999): (32,134 - 4,235.11)/5,982.99 = 4.66.\n* Axis y (copy orders 1999): (832 - 308.71)/239.54 = 2.18.\nIn statistics, the reduced central values have an average score of 0 and a standard deviation of 1. For our example, The Journal of Pharmacology and Experimental Therapeutics, this means that this journal figures in the group of highly cited journals and has a copy order number above the average. This statistical technique is often applied to classify objects characterised by two different numerical values into four groups around an average value. For instance, in scientometrics a map like Figure 1 is called a \"strategic diagram\".\n4. Comparison and typology of the journals: In a second step, we compared the 13 yearly maps. Since these maps were not easy to read, our approach was to focus on the stability of the journals in the quarters. To do so, we had to define comparison indicators:\n* The first indicator was to identify in which quarter a journal most often appeared (this indicator is equivalent to the median). We did not assign an inclusion threshold to a quarter. The only parameter taken into account was each journal's score (\"dot\").\n* The second indicator was to count the number of times a journal changed quarters during the 13 years studied.\nThis method led us to propose a classification of the serial titles into four groups as shown below:\n1. Quarter 1 serials. A total of 19 titles most often found over 13 years in the upper right-hand corner of the maps, i.e. those serial titles accounting for the highest number of document supply requests and the highest number of citations.\n2. Quarter 2 serials. A total of 16 titles most often found over 13 years in the upper left-hand corner of the maps, i.e. those serial titles accounting for the highest number of document supply requests but a lesser number of citations.\n3. 
Quarter 3 serials. A total of 47 titles most often found over 13 years in the lower left-hand corner of the maps, i.e. those serial titles accounting for the lowest number of document supply requests and the lowest number of citations.\n4. Quarter 4 serials. Seven titles most often found over 13 years in the lower right-hand corner of the maps, i.e. those serial titles accounting for the highest number of citations but a lesser number of document supply requests.\nThe complete title lists for each group (quarter) are presented in the Appendix. To confirm the representative character of this clustering, we performed a second processing, taking into account the average number of citations in JCR and the average number of document supply requests recorded at INIST between 1992 and 2004. Therefore, we transformed for each serial title the document supply request and citation averages over the 13 years into standard scores. Thus, we obtained a new map distributing the 89 serial titles into four quarters (see Figure 2). We observed that, out of the highest quarters, only three journals showed up in different quarters than those of the preceding processing: Pharmacology & Therapeutics and Drug Metabolism and Disposition moved from Quarter 2 (highest) to Quarter 1 (average); the Journal of Natural Products moved from Quarter 3 (highest) to Quarter 4 (average). This somewhat surprising result showed that the data were very stable and confirmed our clustering. Because of the stability and consistency of the two results, we analysed the characteristics of the titles in the four groups obtained by the first processing.\n4.1. First quarter: most cited, most requested\n5. Global analysis: Once we had confirmed and explained our clustering, we approached this evolution from a broader perspective through document supply requests and citations.\n5.1. Evolution of the document supply requests\n6. Methodological limitations: By observing the four quarters, we can see that 47 serial titles of the sample never changed quarters during the period studied. This lack of \"movement\" is significant since it shows a certain amount of stability in the sample. What first comes to mind is that INIST's collections are partly put together on the basis of the public they serve, which is composed of public research organisations, higher education, and the corporate sector. This heterogeneity contrasts with the stability we observed in our sample. However, we must draw attention to the fact that our sampling was biased by the limitation to journals with an impact factor, which means that all selected titles were cited several times. Thus, we introduced \"artificial\" homogeneity or stability into the sample. Therefore, it is no wonder that our sample did not reveal many differences.\n7. Conclusion: With the boom in electronic publishing, searching and using electronic resources has become a fact of life in the scientific community. 
However, the reality is not uniform across all disciplines, even, as the work we report in this article shows, inside a given domain such as pharmacology. We also saw that to understand the break in the link between citation and document supply request we must take into account elements related to the way in which scientific communication processes operate, particularly:\n* The status of journals rooted in a discipline, which, because of their seniority in the field, have historically contributed to the shaping of a discipline and which continue to do so.\n* The guarantee of quality that they represent for researchers who see them as a \"must\" when they are looking for information.\n* The steady identity of these journals despite the mergers which have been rampant in scientific publishing for over 30 years and the continuing inflation of journal prices.\nIn fact, the analysis by quarters which we proposed in this article, and which stresses the impact of electronic documents on the requests-citations relationship, also shows that this impact seems limited or deferred for serials deeply anchored in a discipline. The third and last article will focus on the characteristics of the requests (customers) in order to give empirical evidence of the underlying dynamics of the evolution of document supply.\n",
"role": "user"
},
{
"content": "Purpose: - The article aims to investigate the correlation between citations and the document supply of print serials.\nDesign/methodology/approach: - The study is based on data from INIST-CNRS for document supply requests and from ISI for citations from 89 serials with JCR impact factors in pharmacology. Data were collected from 1992 to 2004.\nFindings: - The results distinguish four groups of serials with different relationships between document supply requests and citations. The characteristics of the serials of the four groups are described (year of creation, price, etc.). The evolution of the correlation between document supply and citations is analysed.\nOriginality/value: - This is the second of three articles on a longitudinal study over 13 years on different aspects of the relationship between document supply and impact factor (citations) in pharmacology, a scientific domain with a high use of scientific information.\n",
"role": "assistant"
}
] |
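The mapping method described in the pharmacology record above — standardising each year's citation and copy-order counts into z-scores and assigning every journal to one of four quadrants ("quarters") — can be illustrated with a minimal Python sketch. This is not the authors' code: the journal values below are hypothetical apart from the quoted 1999 figures for The Journal of Pharmacology and Experimental Therapeutics, and the sample standard deviation is used where the paper may have applied the population form.

```python
# Illustrative sketch of the z-score "strategic diagram" classification.
# All data are hypothetical placeholders except the JPET 1999 figures.
from statistics import mean, stdev

journals = {
    # title: (citations_1999, copy_orders_1999)
    "J Pharmacol Exp Ther": (32134, 832),
    "Journal A": (2500, 150),
    "Journal B": (900, 480),
    "Journal C": (1200, 90),
}

cites = [c for c, _ in journals.values()]
orders = [o for _, o in journals.values()]
c_mean, c_sd = mean(cites), stdev(cites)    # sample stdev; an assumption
o_mean, o_sd = mean(orders), stdev(orders)

for title, (c, o) in journals.items():
    zc = (c - c_mean) / c_sd                # x-axis: standardised citations
    zo = (o - o_mean) / o_sd                # y-axis: standardised copy orders
    # Quadrant assignment follows the record's definitions:
    # Q1 high/high, Q2 high orders/low cites, Q3 low/low, Q4 high cites/low orders.
    quarter = {(True, True): 1, (False, True): 2,
               (False, False): 3, (True, False): 4}[(zc >= 0, zo >= 0)]
    print(f"{title}: z_cit={zc:.2f}, z_orders={zo:.2f} -> Quarter {quarter}")
```

Repeating this per year and tracking each journal's modal quadrant and quadrant changes would reproduce the two comparison indicators the record describes.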
Knowledge transfer in interorganizational partnerships: what do we know?
|
[
"Interorganizational Partnerships",
"Alliances",
"Network",
"Collaboration",
"Knowledge transfer"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\n1. Introduction: Knowledge transfer across organizations is central to the development of sustainable competitive advantages, as firms rarely innovate in isolation and depend to a large extent on external partners (Coombs and Metcalfe, 1998; Powell et al., 1996; Ring et al., 2005). Defined as a process through which an organization purposefully learns from another, interorganizational knowledge transfer is an important sub-field of research in partnerships, which has undergone noteworthy progress in recent decades (Easterby-Smith et al., 2008; Battistella et al., 2016).\n2. Methodology: We carried out a systematic literature review following Massaro et al. (2016), Petticrew and Roberts (2008) and Tranfield et al. (2003). A systematic literature review is a scientific method of making sense of a large body of information that explicitly aims to limit the bias of traditional reviews (e.g. preferences of the author for pet theories), mainly by attempting to detect, appraise and synthesize all relevant studies that address a particular set of questions (Petticrew and Roberts, 2008). As to ensure rigor, objectivity and relevance to our methodology, we first developed a literature review protocol that laid out the course of the study, which we detail below. This structured process guided the identification, selection and assessment of the relevant literature and it is efficient and effective (Watson, 2015). In addition to a description of our research topic, our protocol stated the questions we wanted to explore: How is research on knowledge transfer in interorganizational contexts (i.e. partnerships, alliances, networks, joint ventures, etc.) evolving? What do we know about it (the focus) and what do we need to explore (the critique)? What recommendations can we offer to practitioners who face the task of developing a favorable learning environment for alliance partners?\n3. Literature review: Our review shows that the literature covers three overarching themes. The first theme discusses knowledge in itself and elaborates on the types and characteristics. The second theme presents the factors that impact knowledge transfer, either in the decision to form a partnership or during its operation. The third theme encompasses aspects related to the consequences of interorganizational knowledge transfer, i.e. effectiveness and organizational performance. We formulated 14 research questions that reflect these overarching themes, as described in Table II[6]. Next, we discuss the research questions in detail.\n4. Analysis: With the view of coherently integrating our findings, we adopt an antecedents-process-outcomes framework, in line with Bengtsson and Raza-Ullah (2016) and Oliver and Ebers (1998). We therefore organize our analysis in three blocks: factors that antecede knowledge transfer, the process of transference itself and the outcomes. We classified the antecedents as factors that precede knowledge transfer, either driving the formation of the partnership or throughout it. These are variables that influence an organization's decision to establish a partnership, like appropriability regimes, technology sophistication (macro context) and the existing internal processes of the recipient firm. In turn, process variables are those directly connected to the operational dimensions of knowledge transfer, such as routines, managerial supervision and social relations. 
Outcomes refer to the results (gains and losses) obtained through the partnership.\n5. Opportunities for future research: Our literature review and framework reveal that the study of knowledge transfer requires distinct analytical and methodological approaches. Since interorganizational partnerships are arrangements characterized by different forms of governance and involve at least two organizations, which, in turn, are composed of innumerable individuals, understanding the knowledge transfer process calls for a multilevel analysis. Aspects such as the motivation of individuals and organizations to engage in collaboration, along with formal and informal governance instruments, require different conceptual backgrounds. As our framework was organized around antecedents, process and outcome factors, it elucidated that understanding the relationships between levels and variables is a vital issue. Moreover, since our framework unravels the dynamic character of collaboration, it calls for longitudinal studies in addition to the quantitative cross-sectional methods that prevail in this research stream.\n6. Conclusions: Knowledge transfer is central to the development of competitive advantages, as organizations increasingly depend on partnerships with external partners. However, this is far from a trivial task, especially when it takes place in interorganizational arrangements. Our systematic literature review aims at uncovering the state-of-the-art on this topic by addressing the following questions: What factors impact knowledge transfer in interorganizational alliances? How do these factors interact with each other? Our study extends existing literature in three ways. First, we offer a synthesis of the variables and of how and why they influence knowledge transfer in partnerships. Given the complexity and heterogeneity of the field, we point out the position of individual variables and their segmentation, emphasizing areas of divergence and convergence that had not been self-evident in the literature. We also identify themes that are less well investigated or contested, and indicate methodological deficiencies and other weaknesses. We particularly pinpointed the need for multilevel inquiries since, in our view, important interactions have so far not received the attention they deserve. In this way, we have suggested an agenda for future research.\n",
"role": "user"
},
{
"content": "Purpose: The purpose of this paper is to review the literature on knowledge transfer in interorganizational partnerships. The aim is to assess the advances in this field by addressing the questions: What factors impact knowledge transfer in interorganizational partnerships? How do these factors interact with each other?\nDesign/methodology/approach: The study reports results of a literature review conducted in ten top journals between 2000 and 2017 in the fields of strategy and innovation studies.\nFindings: The review identifies three overarching themes, which were organized according to 14 research questions. The first theme discusses knowledge in itself and elaborates on aspects of its attributes. The second theme presents the factors that influence interorganizational knowledge transfer at the macroeconomic, interorganizational, organizational and individual levels. The third theme focuses on the consequences, namely, effectiveness and organizational performance.\nPractical implications: Partnership managers may improve and adjust contracts, structures, processes and routines, as well as build support mechanisms and incentives to guarantee effectiveness in knowledge transfer in partnerships.\nOriginality/value: The study proposes a novel theoretical framework that links antecedents, process and outcomes of knowledge transfer in interorganizational partnerships, while also identifying aspects that are either less well researched or contested and thereby suggesting directions for future research.\n",
"role": "assistant"
}
] |
A psychological approach to regaining consumer trust after greenwashing: the case of Chinese green consumers
|
[
"Greenwashing",
"Psychological resilience",
"Regaining consumer trust"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\nIntroduction: Green marketing has gained wide acceptance as a viable competitive strategy within the business world (Chen, 2008a, 2008b; Babiak and Trendafilova, 2011; Won-Moo et al., 2013). The Report on Green Consumers in China, which is published by Ali Research (2016), shows that the number of green consumers on the Ali Retail Platform in 2015 exceeded 65 million, accounting for 16% of active users; this number increased fourteen times over the past four years. Given the high-profit margins associated with green marketing (Menon, 1997), some corporations have been found to overstate or even fabricate information regarding the environmental value of their products (Nick, 2013); this approach is called \"greenwashing\". For example, a corporation may overstate its commitment to reducing the adverse environmental impact of its activities to boost its reputation for sustainable practice (Chen, 2013). In this way, the company can increase its profit margins while avoiding the full cost of the green practices implied by its rhetoric. Sirdeshmukh et al. (2002) demonstrated that there is a positive relationship between the value attributed by customers to products and the trust they have in them. However, greenwashing has increased consumer scepticism and undermined their trust in products, thus reducing consumer demand (Kalafatis and Pollard, 1999). If corporations with similar problems fail to eliminate the trust crisis caused by \"greenwashing\" and fail to successfully win back the trust of consumers, their competitive advantage in the market will inevitably be affected (Zhuang and Zhuang, 2010; Yang, 2015).\nLiterature review: Greenwashing and consumer trust\nFindings: Dominant features of the interviewees\nConclusions and implications: Conclusions\n",
"role": "user"
},
{
"content": "Purpose: The purpose of this study is to suggest an approach to regain consumer trust after negative effects of greenwashing that draws corporations and consumers into a conflicted relationship.\nDesign/methodology/approach: The authors collect and interpret qualitative data from in-depth interviews to develop a theoretical approach that enables the rebuilding of trust between greenwashing corporations and their consumers using the concept of psychological resilience.\nFindings: This analysis indicates that the approach is an interaction between consumers with green brand loyalty and greenwashing corporations. This type of consumer demands emotional factors, functional factors and legitimate factors in the process of psychological resilience, and after greenwashing, corporations should select appropriate recovery strategies to stimulate these protective factors.\nOriginality/value: Previous research studied green consumer trust in the marketing field but did not explore the core of trust which was regarded as a cognitive process. This paper investigates green consumer behaviour under the perspective of psychological resilience and makes an innovative attempt to understand drivers of regaining consumer trust. Previous research works put forward a series of strategies related to regaining trust, but they did not discuss the mechanisms by which these strategies work. Using the method of grounded theory, we attempt to reveal the \"black box\" of consumers cognition after greenwashing and propose a strategy for regaining consumer trust.\n",
"role": "assistant"
}
] |
Privacy calculus or heuristic cues? The dual process of privacy decision making on Chinese social media
|
[
"Self-disclosure",
"Information asymmetry",
"Social applications",
"Flow experience",
"Elaboration likelihood theory",
"Privacy calculus"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\n1. Introduction: The increasing popularity of social networking applications has been a source of many privacy concerns. Activities on social applications may reveal our interests, attitudes and preferences. These user-generated contents are critical for personalizing products and services (Erevelles et al., 2016; Fulmer et al., 2009; Kumar et al., 2010). Users of mobile applications worry that their personal information will be collected, analyzed or transferred without their permissions (Dean et al., 2016). Concerns for privacy loss may hinder user's willingness to disclose personal information. This highlights the importance of understanding the antecedents of users' self-disclosure intentions. As a response, tremendous studies have examined the predictors of self-disclosure intention in online settings (Awad and Krishnan, 2006; Jiang et al., 2013; Wang et al., 2017). Most of their models were tested using western samples with limited attention being paid to China. For cultural and political reasons, conclusions driven from western samples can not necessarily be generalizable to China. One reason is that attitudes and behaviors toward privacy issues and even how privacy is defined greatly differ across cultures. For example, Acquisti et al. (2015) asserted that \"Americans are reputed to be more open about sexual matters than are the Chinese, whereas the latter are more open about financial matters, such as income, cost of home, and possessions\" (p. 513). Thus, the aim of this study is to examine the determinants of self-disclosure intention on mobile social media using Chinese data.\n2. Conceptual framework: 2.1 Competing views on privacy decision making\n3. Research model and hypotheses development: We designed our model by integrating the privacy calculus theory into the ELM. We first hypothesize that central cues and peripheral cues jointly predict self-disclosure intention. We then propose that peripheral cues moderate the relationships between central cues and self-disclosure intention. Specifically, we argue that privacy harms and information asymmetry strengthen the negative relationship between privacy concern and self-disclosure intention and privacy disclosure of friends weaken that relationship. Flow experience strengthens the positive relationship between perceived rewards and self-disclosure (Figure 1).\n4. Methodology: 4.1 Research setting and survey procedures\n5. Data analysis: 5.1 Reliability and validity\n6. Discussion: 6.1 Key findings\n7. Conclusions: This study conveys two messages. First, self-disclosure intention develops along a dual route, including the rational central route and the heuristic peripheral route. Second, we find that peripheral cues moderate the relationship between central cues and self-disclosure intention. Our findings imply that privacy decision making has grown increasingly complex and that no single strategy of encouraging disclosure fits all business circumstances. Service providers are advised to adopt segmentation and targeting strategies to encourage self-disclosure by users as much as possible.\n",
"role": "user"
},
{
"content": "Purpose: The purpose of this paper is to explore the antecedents of self-disclosure intention on mobile social applications. This study integrates privacy calculus model and elaboration likelihood theory to reconcile the rational and heuristic views of privacy decision making.\nDesign/methodology/approach: Using a \"random dialing\" sampling method, an empirical survey with 913 respondents was conducted. A series of regression models were employed to test the proposed relationships. Robust checks with sub-group analysis were conducted.\nFindings: Self-disclosure intention develops along a dual route including the central route and the peripheral route. When the central route predominates, social media users form their attitudes toward self-disclosure based on a rational calculus of the privacy concern and perceived rewards. When the peripheral route predominates, users perform a more heuristic evaluation of relevant informational cues (information about privacy harms, the extent of information asymmetry between users and operators) and contextual cues (flow experience, privacy disclosure of friends). Peripheral cues moderate the relationships between central cues and self-disclosure intention.\nOriginality/value: This paper extends the Elaboration Likelihood Model by investigating the interaction between the central route and peripheral route. The results provide alternative explanations on the renowned \"privacy paradox\" phenomenon.\n",
"role": "assistant"
}
] |
Improving employees' intellectual capacity for cybersecurity through evidence-based malware training
|
[
"Evidence-based practice",
"Cybersecurity training"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\n1. Introduction: Klein and Prusak (1994) define intellectual capital operationally as intellectual material that has been formalized, captured and leveraged to produce a higher valued asset. According to Kok (2007), intellectual capital consists of three elements: human capital, structural capital (or organizational capital) and relational (customer) capital. Human capital, which includes experience, the know-how, capabilities, skills and expertise of the human members of the organization, plays a pivotal role in the management of intellectual capital by organizations regardless of industry type (Bontis et al., 2000; Maditinos et al., 2011).\n2. Video-based instruction and text-based instruction: Several studies compared video-based instruction and traditional text-based instruction. In general, the study results favor video-based instructions, because videos integrate multiple media such like voice, animation, data and text for transferring the learning (Stonebraker and Hazeltine, 2004). Fern et al. (2002) suggested that videos not only capture and present information but also provide a sensory learning environment that may help learners to understand more and retains information better. Videos have the capability to empower students to engage with concepts and take an active role in learning. Nugent (1982) found that the combination of auditory and visual information is better for retention than those presented through a single information source. Choi and Johnson (2005) found that the video-based instruction was more memorable than the traditional text-based instruction.\n3. Research study: 3.1 Cybersecurity training methods\n4. Discussion: Human capital is considered as the cornerstone of intellectual capital (Kianto et al., 2017). To improve the assets of an organization, organizational investment in intellectual capital must include investment into training and educating the employee. As protecting information-based assets is important, we have no doubt that organizations should provide cybersecurity awareness programs for employees at all levels with the consideration of the job type or the environment they work or deal with (Al-Awadi and Renaud, 2007).\n5. Conclusion: Organizational success is primarily rooted in information-based intellectual activities (Finn and Torgeir, 2008). To protect information-based intellectual activities, organizations must promote continuous education and training to employees on best practices for cybersecurity. However, a lot of cybersecurity awareness training is not effective. Although providing employees with security requirements, guidelines and policies is essential, they are often general in nature. Individual employees may have general knowledge about information security but many of them lack experience in dealing with various malware attacks as malware continues to increase in frequency and complexity. Individual employees also have a different perception of the security vulnerability, severity or extent of the damage (Ng et al., 2009). Some employees may be insensitive to training if they do not feel that they will fall victim to information security crime. Thus, there is a need to further explore and develop more effective evidence-based cybersecurity training approaches and materials. 
Future research could use big data techniques to better understand individual employees' needs and interests, and to enhance personalized training and learning (Sedkaoui and Khelfaoui, 2019).\n",
"role": "user"
},
{
"content": "Purpose: An organization's ability to successfully manage intellectual capital is determined by the actions of its employees to prevent or minimize information security incidents. To prevent more data breaches to intellectual capital, organizations must provide regular cybersecurity awareness training for all personnel. The purpose of this paper is to investigate the effect of different evidence-based cybersecurity training methods on employees' cybersecurity risk perception and self-reported behavior.\nDesign/methodology/approach: The study participants were randomly assigned into four groups (i.e. malware report, malware videos, both malware report and malware videos and no interventions) to assess the effects of cybersecurity training on their perceptions of vulnerability, severity, self-efficacy, security intention as well as their self-reported cybersecurity behaviors.\nFindings: The results show that evidence-based malware report is a relatively better training method in affecting employees' intentions of engaging in recommended cybersecurity behaviors comparing with the other training methods used in this study. A closer analysis suggests whether the training method contains self-relevant information could make a difference to the training effects.\nOriginality/value: This paper reports an in-depth investigation on how different evidence-based cybersecurity training methods impact employees' perceptions of susceptibility, severity, self-efficacy, security intention as well as on their self-reported cybersecurity behaviors.\n",
"role": "assistant"
}
] |
The work engagement grid: predicting engagement from two core dimensions
|
[
"Engagement",
"Involvement",
"Energy",
"Work engagement grid"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\nIntroduction: In the last decade, the scientific literature has validated the importance of considering work engagement for many relevant outcomes such as employees' performance (e.g. Demerouti and Cropanzano, 2010), leaders' creativity and charisma (e.g. Bakker and Xanthopoulou, 2013) and well-being (e.g. Bakker and Leiter, 2010), both at the individual and at the team level (e.g. Salanova et al., 2003; Costa et al., 2014b). Most of the research on work engagement at the individual level validated its three-factor structure, composed by vigor, dedication and absorption (e.g. Bakker and Leiter, 2010). At the team level, however, this three-factor structure has not been found in some studies (Costa et al., 2014a), where those factors collapse into a single dimension. The dimensionality of the construct may, therefore, distinguish it between levels. Indeed, as team work engagement (TWE) is defined as an emergent phenomenon (Kozlowski and Chao, 2012), it will develop from team members' interaction. On the contrary, individual work engagement is defined as an individual state of work-related well-being and therefore does not need any external manifestation to develop. It is likely, therefore, that the key theoretical dimensions of work engagement weigh differently at the individual and team levels.\nTheoretical background: Work engagement is a positive, fulfilling, affective-motivational state of work-related well-being (Leiter and Bakker, 2010). In the last ten years, many studies have demonstrated its importance for both the performance and the well-being of employees (Halbesleben, 2010). For example, Schaufeli et al. (2008), in a study of middle managers, found that work engagement was related to working extra hours, job satisfaction, as well as good social functioning and positive health. Also, Bakker (2009) reports that engaged employees are self-efficacious individuals who actively act in order to influence events that are relevant for their lives. They are characterized by high energy and activity levels and by a positive attitude that allows them to create their own positive feedback and recognition. Work engagement has also important consequences for organizational outcomes (Schaufeli and Bakker, 2010), such as organizational commitment, personal initiative or extra-role behavior. For example, Salanova and Schaufeli (2008) in a study of a Spanish (n=386 technology employees) and a Dutch (n=338 telecom managers) samples found that work engagement mediated the relationship between job resources and proactive behavior and that this relationship was invariant across countries.\nThe present study: Considering that, to our knowledge, there is no measure that encompasses both energy and identification/involvement, we developed an alternative one, based on the two theoretical developments already described. Our proposal (cf. Figure 2) reflects the two core dimensions of energy and involvement. The energy dimension is represented by the vertical axis (from high energy to low energy or exhaustion); the horizontal axis represents the involvement dimension (from low involvement to high involvement). The circle was transformed into a grid, where respondents signal with a cross the place that best describes their work-related state in a given timeframe. The grid emerges, then, from the crossing of the two axis, each with eight points, and is therefore composed of 64 possibilities for answers. 
We opted for an eight-point scale so that, in future studies, four quadrants can be defined, with work engagement positioned in the upper right one, as proposed by Bakker and Oerlemans (2011). At the extreme of each quadrant, a brief description of what it represents is provided (e.g. in the lower right corner, representing low activation and high involvement: \"I/we do not feel energetic when I am/we are working but I am/we are enthusiastic about my/our work\"):\nMethod: Participants and procedure\nResults: Individual work engagement\nDiscussion: This study tested whether the dimensions of energy and involvement predict work engagement, both at the individual and at the team levels, using two distinct but equivalent samples totalling 1,192 individuals. The present work contributes to the understanding of work engagement at different levels of analysis, as well as to the distinction between the individual and team-level constructs, considering their structure. Indeed, we already had different theoretical models for understanding work engagement at the individual (e.g. Demerouti et al., 2001) and team (e.g. Costa et al., 2014b) levels, which implied different specific predictors for each. We also had empirical work focussing on the structural part of the construct that had failed to confirm the three-factor structure at the team level (Costa et al., 2014a). This work provides a possible framework for understanding the structural differences between levels, suggesting that individual and team-level work engagement differ in the predominance of one or two core dimensions: energy and involvement. The results support the claim that both dimensions predict individual work engagement. At the team level, engagement seems to be essentially defined by the existence of high levels of energy.\n",
"role": "user"
},
{
"content": "Purpose: The purpose of this paper is to test whether work engagement can be predicted by two core dimensions, energy and involvement, both at the individual and team levels.\nDesign/methodology/approach: Based on the circumplex model of affective well-being (Russell, 1980), the authors propose the work engagement grid and collect data on individual and team work engagement (TWE) from two different samples (n=1,192 individuals).\nFindings: Results show a significant positive relationship between the individual engagement grid and individual work engagement. However, only the energy dimension significantly predicted TWE. The authors also provide evidences for the relationship between the engagement grid and related variables (e.g. adaptive performance, team cohesion, satisfaction), and show that the combination of energy and involvement present smaller correlations with those variables than the complete engagement scales.\nResearch limitations/implications: Data were collected from simulation samples, therefore generalization of the findings must be done with caution. The findings allow for developing a brief measure of work engagement, particularly useful for longitudinal or diary study designs.\nPractical implications: When teams are the work unit, the displays of energetic behaviors ought to be fostered in order to boost collective engagement.\nOriginality/value: The authors add to the existing literature on work engagement, concluding that individual and team-level work engagement have structural differences between them, with the collective construct being dependent on external manifestations of energy, and that individual work engagement needs a cognitive component of absorption in order to foster performance.\n",
"role": "assistant"
}
] |
Influence of big data and predictive analytics and social capital on performance of humanitarian supply chain: Developing framework and future research directions
|
[
"Performance measurement",
"Disaster management",
"Humanitarian supply chain",
"Social capital",
"Humanitarian operations",
"Big data predictive analytics"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\n1. Introduction: In recent years, there has been an increasing trend of disasters in different parts of the world (Pateman et al., 2013). Each disaster has a long-term social, environmental and economic impact on the region. The community affected by the disaster loses its ability to recover its original state and needs external support for a considerable length of time. Humanitarian supply chain (HSC) is a process of mobilizing people and resources required to save lives and help people affected by a disaster (Van Wassenhove, 2006). This field has now moved from the stage of just managing a disaster to a systematic approach of HSC (Chandraprakaikul, 2010; Oloruntoba et al., 2016; Behl and Dutta, 2018). As the instances of natural disasters such as earthquakes, tsunamis or landslides occur without any warning, the study provides a holistic perspective of disasters and how to manage them more effectively.\n2. Literature review: The study is structured on a systematic literature review, a widely used methodology proposed by Tranfield et al. (2003). The review is conducted in two phases. In the first phase, HSC papers are shortlisted that discuss the role of stakeholders and challenges faced by humanitarian organizations. In addition, keywords such as \"humanitarian supply chain,\" \"disaster supply chain,\" \"humanitarian logistics,\" \"humanitarian supply chain performance,\" \"supply chain management\" are used and leading database such as EBSCO, JSTOR, Scopus and Web of Science and journal platforms such as Elsevier, Emerald, Taylor and Francis, Springer are explored (Table I provides a listing of keywords used for searching the literature from journals, business magazines and web search engines). While Phase 1 study focuses on studying the key dimensions of HSC, the focus in Phase 2 is on exploring performance indicators for HSC, role of BDPA and social capital on the performance of HSC.\n3. Model and discussion: The proposed conceptual model grounded in resource-based view theory and social capital theory is shown in Figure 4. Drawing from a resource-based view theory and extant literature bundling of tangible resources (basic resources, data, technology), human resources (technical skills and management skills), intangible resources (data-driven culture and culture of continuous learning within an organization) lead to higher order capability in BDPA (Gupta and George, 2016). Data analytics capability is itself a core component to drive and improve the supply chain resilience and gain competitive advantage (Dubey, Gunasekaran, Childe, Fosso Wamba, Roubaud and Foropon, 2019; Dubey, 2019). Moreover, BDPA has a significant effect on visibility and coordination (Dubey, Luo, Gunasekaran, Akter, Hazen and Douglas, 2018). It helps in guiding humanitarian operations and makes them responsive, impactful, sustained and efficient (Swaminathan, 2018). This can further improve the supply chain performance. There is an important linkage between the resource dependence theory and big data analytics (Prasad et al., 2018). With these resources at disposal, humanitarian organizations exhibit higher levels of visibility and coordination (Dubey, Luo, Gunasekaran, Akter, Hazen and Douglas, 2018) that leads to an improved ability of sharing the information among the stakeholders. This is especially important in the HSC context as there are multiple stakeholders such as beneficiary, donors, humanitarian NGOs, military and governments. 
Improved coordination among them helps improve performance and meet the goals of the HSC. In most disaster situations, the expectation is that either the government or the military will act swiftly, save lives and help the victims. One of the most relevant aspects often overlooked in disaster relief operations is the role of the local community in helping each other in such crises. Disaster management (DM) is considered an engineering issue, and solutions are often sought from external resources (Nakagawa and Shaw, 2004).\n4. Implications: This study establishes the role and importance of BDPA and social capital in HSC performance. It has both academic and managerial implications, which are discussed in this section.\n5. Conclusion, limitations and future research directions: As the number and intensity of natural disasters grow, there is an increasing need for research on focused areas of HSC operations. The field has evolved from a response stage to a systems approach to managing an HSC (Behl and Dutta, 2018).\n",
"role": "user"
},
{
"content": "Purpose: The purpose of this paper is threefold: first, to investigate the role of big data and predictive analytics (BDPA) and social capital on the performance of humanitarian supply chains (HSCs); second, to explore the different performance measurement frameworks and develop a conceptual model for an HSC context that can be used by humanitarian organizations; and third, to provide insights for future research direction.\nDesign/methodology/approach: After a detailed review of relevant literature, grounded in resource-based view and social capital theory, the paper proposes a conceptual model that depicts the influence of BDPA and social capital on the performance of an HSC.\nFindings: The study deliberates that BDPA as a capability improves the effectiveness of humanitarian missions to achieve its goals. It uncovers the fact that social capital binds people, organization or a country to form a network and has a critical role in the form of monetary or non-monetary support in disaster management. Further, it argues that social capital combined with BDPA capability can result in a better HSC performance.\nResearch limitations/implications: The proposed model integrating BDPA and social capital for HSC performance is conceptual and it needs to be empirically validated.\nPractical implications: Organizations and practitioners may use this framework by mobilizing social capital, BDPA to enhance their abilities to help victims of calamities.\nSocial implications: Findings from study can help improve coordination among different stakeholders in HSC, effectiveness of humanitarian operations, which means lives saved and faster reconstruction process after disaster. Second, by implementing performance measurements framework recommended by study, donors and other stakeholders will get much desired transparency at each stage of HSCs.\nOriginality/value: The findings contribute to the missing link of social capital and BDPA to the existing performance of HSC literature, finally leading to a better HSC performance.\n",
"role": "assistant"
}
] |
In search of a living wage in Southeast Asia
|
[
"Indonesia",
"Cambodia",
"Trade unions",
"Living wage",
"Minimum wage"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\n1. Introduction: The status of Southeast Asian countries as labour-intensive manufacturing hubs within global supply chains has drawn international attention to their varying labour standards and employment conditions. International concern over weak social protection and wages for workers in labour-intensive manufacturing across the region is largely focussed on global corporations that source from supplier firms in a region characterised by an evident gap between the formal commitments of states to international labour standards and the reality of their limited capacity and enforcement (Ford and Gillan, 2016). Attention to wages and conditions are most animated in the garment and footwear sector because of the high profile of consumer fashion and leisurewear brands, debate over whether the industry helps or hinders sustainable economic development in nations that are reliant on it for export revenues, links to questions of gender justice given the sector's female-dominated workforce, and, of course, the cumulative effect of decades of consumer-focussed union and NGO campaigns. As a consequence of these pressures, debates over the definition, processes and outcomes of minimum and \"living\" wages are often politically contentious in garment-producing countries.\n2. The concept of a living wage: To a large extent, the literature on the concepts and practices associated with the living wage has focussed on developed rather than developing countries[2]. Thinking about the formulation and realisation of a living wage in a region like Asia accentuates the obvious but crucial significance of context: a living wage campaign in, for instance, a global city like London may indeed target workers who are in low-paid and precarious employment, but constitutes a very different economic and social terrain from developing Asia where labour markets are defined by the structural predominance of informal work and very limited social welfare provisions. Thus, whereas living wage campaigns in Europe or North America may broaden out to incorporate issues such as social inclusion, in developing society contexts with \"limited employment or trade union rights, where any minimum wage rates are very low and weakly enforced and state systems of social security are rarely provided\" (Parker et al., 2016, p. 3).\n3. The Asia floor wage as a regional living wage strategy: The second distinctive feature of the Asia Floor Wage is the initiative's Asia-wide focus. Its steering committee includes a number of labour NGO activists from Europe and the USA, most notably the Clean Clothes Campaign (AFWA, 2013). But while the AFWA has support from international civil society organisations, the scope of its activities is limited to Asia - represented in the alliance by Bangladesh, Cambodia, China, Hong Kong, India, Indonesia, Malaysia, Pakistan, Sri Lanka and Thailand (AFWA, 2014b) - and its demands framed as emanating from, and driven by, labour activists based there (Bhattacharjee and Roy, 2015a).\n4. 
The Asia Floor Wage, collective bargaining and wage-setting: Despite its apparent lack of success engaging with the global unions, the AFWA approach is notable for its emphasis on trade union involvement, which sets it apart from many initiatives for monitoring and improving standards for workers in labour-intensive manufacturing driven by corporations, states or NGOs in various combinations, which are often criticized as both ineffective and in some cases a de facto mechanism of union substitution[7].\n5. Campaigning for a living wage in Southeast Asia: Before examining the implementation of a living wage campaign strategy in the two selected countries, it is necessary to briefly note the broader demographic and economic context of the garment industry in the Asian region. Asia's export garment-producing countries are clustered in South and Southeast Asia. Within Southeast Asia, garments constitute a significant proportion of manufacturing exports in Cambodia, Indonesia and Vietnam (see Table I).\n6. Conclusion: All global or regional union or advocacy groups inevitably confront challenges when working in and across multiple national contexts. These include the difficulty of resourcing and then sustaining campaigning interventions and developing effective organisational structures across scale, as well as the relative incentives or disincentives for local or national unions to engage in regional or global networks or campaigns. As a consequence, the reach and effect of transnational wage campaigns can only be understood with consideration of the national economic, political and institutional context that drives the strategies and priorities of constituent organisations, governments and industry stakeholders. Through its actions and communications, the AFWA has no doubt contributed to increased awareness in the region of the living wage concept and, alongside initiatives and campaigns by other organisations, attention to many non-wage problems that have a negative impact on the labour rights and wellbeing of garment workers in global supply chains. The Asia Floor Wage has gained some traction with international NGOs such as the Fair Labor Association and the Workers Rights Consortium and, as noted earlier, has been adopted as a living wage benchmark by the Fair Wear Foundation (Bhattacharjee and Roy, 2012). However - in Cambodia and Indonesia at least - the campaign has had a minimal effect on the discourse and practice of brands, local companies and government agencies, or even local trade unions.\n",
"role": "user"
},
{
"content": "Purpose: Debates over the definition, processes and outcomes of minimum and \"living\" wages are heated and often politically contentious in garment-producing countries. Internationally, there have been various initiatives to promote and support the implementation of a living wage for workers in labour-intensive manufacturing, ranging from corporate-driven social responsibility and multi-stakeholder initiatives to the long-standing living wage campaign of the global unions. One prominent regional initiative is the Asia Floor Wage Alliance (AFWA). The purpose of this paper is to assess its reach and effect in Southeast Asia.\nDesign/methodology/approach: A living wage campaign is assessed with reference to Indonesia and Cambodia, two important garment manufacturing countries in Southeast Asia. The paper draws on data collected in interviews with garment manufacturers, brand representatives, trade unionists and labour NGO activists, including members of the AFWA Steering Committee in Indonesia and Cambodia, complemented by a systematic review of documents and reports produced by the AFWA.\nFindings: As the paper shows, despite a series of initiatives, the Asia Floor Wage has failed to gain traction in Cambodia or Indonesia. This is so, the paper argues, because national economic, political and institutional contexts are the primary drivers of the strategies and priorities of constituent organisations, governments and industry stakeholders. In the absence of robust local and regional coalitions of trade unions, efforts towards a common and coordinated regional approach to living wages are thus unlikely to gain traction.\nOriginality/value: To a large extent, the literature on the concepts and practices associated with the living wage has focussed on developed rather than developing countries. This paper extends the literature by providing a systematic examination of a transnational wage campaign in developing Asian countries.\n",
"role": "assistant"
}
] |
A random effects multinomial logit model for the determinants of exit modes: Evidence from a panel of US manufacturing firms
|
[
"Manufacturing firms",
"Panel data",
"Exit modes",
"Random effects multinomial logit model"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\n1. Introduction: The literature on firm exit defines exit as an event, which implies the termination of firms' economic activities together with the end of its legal corporate entity. There are several modes of exit, for instance, liquidation as a mode of exit results in the sale of assets and loss of legal corporate entity. However, the decision to liquidate could be voluntary or involuntary. Moreover, acquisition as a mode of exit implies the transfer of assets and legal corporate entity to the acquirer. It is therefore argued that a firm can possibly exit in three alternative modes, namely voluntary liquidation, involuntary liquidation and acquisition. In recent years, some studies have examined the factors associated with modes of exit and also treated exit as a heterogeneous entity. For instance, in the late 1980s, few empirical studies found differences in the determinants of exit modes and they did not receive much attention from researchers (Hudson, 1986; Pastena and Ruland, 1986; Lau 1987). It was argued in the literature that exit modes have different causes and consequences, therefore these modes of exit were treated as mutually exclusive events (Schary, 1991). Precisely, the idea of heterogeneity in exit modes has its roots in the seminal work of Schary (1991). Schary's theoretical model justified the inclusion of exit modes in the analysis, and accordingly empirical results suggested that pooling of exit modes created a bias toward the prediction of exit. In other words, in such a setting, empirical findings could be a mere representation of factors determining firm exit, and not three modes of exit such as voluntary liquidation, involuntary liquidation and acquisition. As a result, this argument created a strong foundation for future research in the area of firm exit and emphasized the need for considering modes of exit as mutually exclusive events. In this context, few empirical studies validated the Schary's argument and provided evidence on the existence of differences in the determinants of exit modes in many countries such as Australia, Germany, The Netherlands, Spain, UK and USA (Love, 1996; Harhoff et al., 1998; Koke, 2002; Prantl, 2003; Heiss and Koke, 2004; Buehler et al., 2006; Bhattacharjee et al., 2007; Jones and Hensher, 2007; Esteve-Perez et al., 2010; Cefis and Marsili, 2012). Moreover, nowadays it is largely accepted that empirical research should consider all modes of exit in examining the determinants of firm exit.\n2. Exit as a heterogeneous entity - a brief literature review: The seminal work of Schary (1991) was the first study in the literature that had investigated the determinants of exit by accommodating the three modes, namely voluntary liquidation, involuntary liquidation and merger/acquisition in the analysis. The study explicitly argued that modes of exit are mutually exclusive events because each exit mode has different causes and consequences on stakeholders[1], and therefore cannot be pooled together. Moreover, the empirical analysis in the study suggested that the model with modes of exit, that is, MNL model had more predictive power than the pooled model. Hence, this finding built a strong foundation for future studies in the field of exit putting emphasis on modes of exit as mutually exclusive events. For example, Harhoff et al. (1998), Koke (2002), Heiss and Koke (2004) and Prantl (2003) investigated the determinants of exit modes of German firms, Buehler et al. 
(2006) of Swiss firms, Love (1996) and Bhattacharjee et al. (2007) of UK firms, Jones and Hensher (2007) of Australian firms, Cefis and Marsili (2012) of Dutch firms and Esteve-Perez et al. (2010) of Spanish firms.\n3. Empirical specification: The empirical specification is based on a recently developed econometric model, that is, the REMNL model. Since the aim of this study is to examine the determinants of exit modes as mutually exclusive events in a multivariate setting using panel data, this model is found to be more appropriate than other models, namely the multinomial probit and nested logit models. The REMNL model has many advantages: in particular, it is computationally efficient, accounts for unobserved heterogeneity and allows for correlation in the error terms. The REMNL model is an extension of the MNL model and is an innovative modeling technique that has gained much popularity in comparison with other models. A detailed review of such models was provided in Hedeker (2003) and Chen and Kuo (2001). In recent years, there has been extensive research on the REMNL model, and those developments are discussed in Skrondal and Rabe-Hesketh (2004).\n4. Data and variables: 4.1 Data\n5. Results and discussion: In panel data analysis, the fixed effects (FE) and random effects (RE) models are often estimated to select the best model for the analysis and interpretation of results. Therefore, fixed effects multinomial logit (FEMNL) and REMNL models are estimated and their results are reported in Tables III and IV, respectively. Initial inspection of the results suggests that there are some visible differences in the determinants of exit modes across the two models (Table V). To check the statistical validity of these differences and select the best model, the Hausman test is utilized. The null hypothesis is that the REMNL model is preferred to the FEMNL model. The χ2 test statistic was 12.365 and was statistically insignificant with 15 degrees of freedom. The test cannot reject the null hypothesis, implying that the REMNL model is the most appropriate model for examining the determinants of firm exit using panel data from the US manufacturing sector. Furthermore, the interpretation of results, along with robustness checks and sensitivity analysis, is based on the REMNL model. In Table IV, the first two columns report the logit coefficients and standard errors for voluntary liquidation, the third and fourth columns report the logit coefficients and standard errors for involuntary liquidation, and the fifth and sixth columns report the logit coefficients and standard errors for the acquisition mode of exit, respectively[8].\n6. Concluding remarks: This study has attempted to examine the determinants of three exit modes, namely voluntary liquidation, involuntary liquidation and acquisition, using panel data on US manufacturing firms. The data set is a panel of 4,408 firms (35,149 firm-year observations) spanning the period 1976-1995. In this study, the response variable is a categorical variable coded as 1 for voluntary liquidation, 2 for involuntary liquidation, 3 for acquisition and 4 for non-exit. The nature of the response variable and the mutually exclusive character of exit modes demand advanced econometric techniques to estimate the relationship between exit modes and a set of explanatory variables. This requirement becomes more pressing in the case of panel data, where correlations in the error terms are very common. 
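As a minimal sketch of such a specification (the notation below follows standard treatments like Hedeker, 2003, and is our illustration rather than the paper's exact equation):
```latex
% RE-MNL sketch: firm i in year t chooses exit mode j, with non-exit (j = 4)
% as the base category; x_{it} stacks firm-, industry- and macro-level
% covariates, and alpha_i captures firm-specific unobserved heterogeneity.
\[
  \Pr(y_{it} = j \mid x_{it}, \alpha_i)
    = \frac{\exp\!\left(x_{it}'\beta_j + \alpha_{ij}\right)}
           {1 + \sum_{k=1}^{3} \exp\!\left(x_{it}'\beta_k + \alpha_{ik}\right)},
  \qquad j = 1, 2, 3, \qquad \alpha_i \sim N(0, \Sigma).
\]
```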
Therefore, a REMNL model is estimated, which serves these purposes with the additional advantage of controlling for firm-specific unobserved heterogeneity in the sample.\n",
"role": "user"
},
{
"content": "Purpose: The purpose of this paper is to examine the factors associated with three modes of firms' exit (voluntary liquidation, involuntary liquidation and acquisition) in a mutually exclusive environment. In particular, three modes of exit are treated as independent events given that different causes and consequences exist for each exit mode. The data set is a panel of 4,408 US manufacturing firms spanning over the period 1976-1995.\nDesign/methodology/approach: The discrete choice model is used to establish a relationship between modes of exit and a set of explanatory variables, which are specific to the firm, industry and macroeconomic conditions. Use of panel data encourages us to estimate a random effects multinomial logistic regression model, which allows exit modes as mutually exclusive events and at the same time controls the firm-specific unobserved heterogeneity in the sample.\nFindings: The analysis suggests that the determinants of voluntary liquidation are age, size, profitability, technology intensity and inflation level. The determinants of involuntary liquidation are size, leverage, profitability and inflation level. For acquisition, determinants are age, size, advertising intensity, Tobin's q, GDP growth, inflation level and interest rate. The findings suggest that exit modes have a different set of determinants and the scale of effects of some common determinants such as age, size and profitability differs between exit modes.\nResearch limitations/implications: The analysis presented in this study relies on data from US manufacturing firms only. Thus, there is a need to explore the determinants of exit modes in other countries as well using the proposed econometric model.\nPractical implications: The findings presented in this paper are useful for managers and policymakers to design strategies/actions for avoiding particular mode of exit.\nOriginality/value: This study provides empirical evidence on the differences in factors associated with exit modes and confirms the existence of mutually exclusive nature of exit modes. Findings suggest that for future empirical studies on firm exit, the exit modes must be treated as a heterogeneous event.\n",
"role": "assistant"
}
] |
Assessing the market potential for a local food product: Evidence from a non-hypothetical economic experiment
|
[
"Spain, Food products",
"Prices",
"Consumer behaviour, Willingness to pay",
"Market research",
"Experimental auctions",
"Random nth price",
"Reference prices",
"Mongeta Ganxet",
"Sample selection model"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\n1. Introduction: The agro-food sector is continually seeking ways to differentiate products through perceived quality and innovative design or by considering environmental or safety issues. This strategic orientation is not aimed at increasing production, but rather towards diversifying the food supply by exalting those attributes or benefits, which may be valued by the consumers, and thereby, differentiate the product from competitors. The so-called \"typical\" or \"local\" food products have occupied a preferential place within the agro-food sector because their particular identity is supported by the quality and origin of its raw materials, as well as the peculiarity of the production method used. \"Typical\" food products can include both non-processed and processed products. Within the former, the level of quality and differentiation has been determined either by the land's natural characteristics and/or climatic conditions or by specific farming systems. Yet, for the initially processed products, the typicality results from the posterior production method, permitting the raw materials to be imported from other areas (Caldentey and Gomez, 1996). The close link between typicality and territory likely explains why \"typical\" food products are usually referred as Origin Labeled Products (OLPs), even when there is no specific legal instrument protecting their name. Within food quality policy, the European Union has recently strengthened its action to protect OLPs, in particular through EC Regulation 2081/92, by introducing Protected Designations of Origin (PDOs) and Protected Geographical Indications (PGIs) as two legal instruments for protection.At the end of 2005 in Spain, 138 PDOs and PGIs were officially registered (excluding wines), of which 17 (12 percent) were located in Catalonia (North-East region in Spain, where Barcelona is the main city) (MAPA, 2005). Each year new \"typical\" products are being recognized as PDO or PGI, such as the \"Mongeta Ganxet\" from the \"Valles Maresme\", the food product we are analyzing in this paper. \"Ganxet\" is a traditional variety of beans (leguminous). The plant has white flowers and its seeds are white, flat and kidney-like in shape (ganxet means small hook in the Catalan language). Although this variety provides a much higher percentage of proteins and a lower percentage of starch than other varieties (Casanas et al., 1998; Santalla et al., 2001), it is not farmed in any other parts of the world.In the last ten years, intensive research has been undertaken to recover this variety as well as its organoleptic characteristics and to allow for a controlled production system. Quite recently, the Association of Producers of the \"Mongeta Ganxet\" (MG) was created whose early task was to request from Spanish authorities the designation of a PDO. However, a lot of effort is still needed in the existing production systems to meet the regulations requested by the PDO. It is also necessary to obtain information about the market potential or how consumers would value this variety in the market.This paper aims at analyzing the market potential of the MG in Barcelona, with around 4 million potential consumers, and located between 30 and 50 km from the production area. We assess the market potential by calculating, through a non-hypothetical economic experiment, consumers' willingness-to-pay (WTP) a price premium for this variety over the conventional counterpart. 
Unlike other hypothetical valuation methods, such as contingent valuation, where surveyed persons tend to overestimate their valuation of the product, we use experimental auctions to simulate a real market situation by offering the participants real products and allowing an exchange of real money. This way, the participant will incur real costs if he or she deviates from his or her equilibrium strategy (i.e. non-truthful valuation can cost respondents real benefits). Additionally, we analyse the sensitivity of the WTP to additional information provided to participants. This information includes reference prices, leaflets, and tasting. Finally, we also estimate a model to assess the main determinants of consumers' WTP. Taking into account the previously mentioned objectives, the contribution of this paper is threefold. First, it is one of the first attempts in Spain to use non-hypothetical economic experiments (i.e. experimental auctions) to elicit the WTP for a food product[1]. Second, although this paper applies established methods in the experimental protocol (the random nth price auction, the introduction of reference prices and the release of additional information during the experiment, including tasting), it is the only one, to our knowledge, that combines the three elements. Finally, this paper is unique in the analysis of the experimental data in the sense that it provides a modelling framework to determine the main factors affecting consumers' WTP.\n2. Experimental auctions as an instrument to measure the willingness to pay: The estimation of WTP for a food product has generated an extensive literature. In general terms, four alternative methods have been used: hedonic prices, contingent valuation, conjoint analysis - choice experiments, and experimental auctions. The last method values the attributes of a good in terms of consumers' revealed preferences, while the first three methods are generally based on stated preferences[2]. During the last decade, the use of experimental auctions (EA) has gained recognition among economists as a tool for the valuation of private and public goods. Auctions are defined as a market institution for determining prices and assigning goods. Auctions have a set of rules that determine, according to the bids presented by the participating bidders, the winner of the auctioned good and the price to be paid (McAfee and McMillan, 1987). Experimental auctions try to simulate a real market situation in which the consumer makes the decision to buy and makes the purchase, thus offering participants real products and allowing for the exchange of real money. Hence, the auction participant can incur real costs if he or she deviates from an equilibrium strategy (Lusk and Shogren, 2007). Experimental auctions are said to elicit more accurate estimates of WTP than other methods because they are based on actual behaviour (not on individual intentions) involving real products and money (Lee and Hatcher, 2001). 
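To make the incentive property concrete, here is a minimal sketch of the random nth price mechanism used in this study (Shogren et al., 2001); the bid values and seed are hypothetical:
```python
import random

def random_nth_price_auction(bids, seed=None):
    """Random nth price auction, sketched.

    Each of the N participants submits one sealed bid; a rank n is drawn
    uniformly from {2, ..., N}; the n - 1 highest bidders each buy one
    unit at the nth-highest bid. Because the price a winner pays never
    depends on her own bid, truthful bidding is optimal, and even
    low-value bidders retain a chance of winning.
    """
    rng = random.Random(seed)
    ranked = sorted(range(len(bids)), key=lambda i: bids[i], reverse=True)
    n = rng.randint(2, len(bids))   # randomly drawn market-clearing rank
    price = bids[ranked[n - 1]]     # the nth-highest bid sets the price
    winners = ranked[:n - 1]        # indices of the n - 1 highest bidders
    return winners, price

# Example: six hypothetical bids (in euros) for a bag of Mongeta Ganxet
winners, price = random_nth_price_auction([3.10, 0.00, 4.75, 2.40, 5.20, 1.80], seed=7)
print(winners, price)
```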
The EA:\n* permits the control of possible deviations from true values and strategic behaviour by participants;\n* obtains the WTP of each individual, which enables us to make parametric assumptions regarding the form of the market demand curves;\n* allows the modelling of the determinants of the WTP; and\n* enables the participant to incorporate \"feedback\" from the experimental auction regarding his or her bids, just as in the real market (Lee and Hatcher, 2001; Lusk and Hudson, 2004).\nDespite these interesting advantages and almost half a century of empirical literature on the use of experimental auctions, various authors have noted some potential limitations. First, the implementation of auctions is a costly task that generates geographical and regional restrictions in the selection of subjects. At the same time, it has been shown that the amount of money participants receive (the participation fee) as compensation for taking part in the auction may bias their bids. Finally, it is quite common to observe null bids, generally due to bidders' lack of interest in the auctioned good (Lusk and Hudson, 2004). In any case, many of these limitations can be overcome with a rigorous process of sample selection, recruiting and auction design and implementation, aspects which we have taken into account in this study.\n3. Design and implementation of the experimental auction: 3.1 Sample, auctioned good and questionnaire\n4. Results: This section is divided into two sub-sections. In the first one, we gather the results from the experimental auction, paying special attention to the effect of the two control variables introduced: reference prices and additional information provided to participants. In the second part, we analyze the market potential of the auctioned product and the main factors that determine the participants' willingness-to-pay a price premium for MG.\n4.1 Effect of the presence of reference prices and the provision of additional information about MG characteristics on the willingness-to-pay for the MG\n5. Concluding remarks: In this study, we evaluated consumers' WTP for a typical local food product, \"Mongeta Ganxet of Valles Maresme\" (MG), and the factors that influence their WTP using an experimental auction in Barcelona. Results suggest that the market for MG is still relatively small, considering that around half of the participants are not willing to pay a price premium for MG, despite acknowledging the organoleptic quality of the product. Furthermore, only 7 percent of participants are willing to pay a price higher than the minimum price for which the MG may be purchased directly from a producer or in the local market. This segment consists of middle- to high-income consumers who have a high degree of knowledge about the product. Results also suggest that the psychographic characteristics of the participants are more relevant than their economic characteristics in influencing WTP. Moreover, as expected, the likelihood of purchasing the product increases with experience with the product. Results indicate that better knowledge about the product and more experience in the consumption of MG increase the probability of paying a price premium for such a product. This result reinforces the necessity of developing promotional campaigns targeted at specific segments of the population. Interestingly, price and convenience attributes negatively affect the probability of an individual paying more for MG. 
In relation to convenience, while in many studies it plays a positive role (or, at worst, no significant role) in explaining the WTP for a high-value food product, in this study the relationship is negative, perhaps due to the cooking time required for the product (around two hours, with the need for constant inspection). As for price, while in many studies the prices of high-valued food products are one to two times the price of their conventional counterparts, the price of MG is around seven times that of conventional white beans. Hence, careful communication and promotion strategies should be designed when explaining the reasons behind such price differentials. Further research is also needed to improve the convenience attribute of the product, as this attribute currently limits its potential to capture an increasing share of the market.\n",
"role": "user"
},
{
"content": "Purpose: - The purpose of this paper is to analyze the market potential of a locally produced and high quality food product (i.e. white bean \"Mongeta Ganxet\" (MG)) from Catalonia, Spain. Consumers' willingness to pay (WTP) for the product is elicited using a non-hypothetical economic experiment and then the sensitivity of WTP values is analyzed with regard to additional information provided to participants that includes reference prices, leaflets and tasting. Finally, a sample-selection model is estimated to assess the factors that significantly affect consumers' WTP for the product.\nDesign/methodology/approach: - To elicit the willingness to pay a premium for the local food product, 90 subjects were invited to participate in a random nth price experimental auction. Before auctioning the product, participants were asked to complete a questionnaire about consumer attitudes and purchase habits.\nFindings: - Only 7 percent of participants are willing to pay a price higher than the minimum price from which the MG product may be purchased directly from a producer or the local producing market. Also, it was found that providing reference prices of substitute products (both conventional and alternative high quality white beans) positively affects WTP for the MG. Interestingly, however, provision of information on organoleptic, nutritional and cooking characteristics of the MG and product tasting do not significantly influence WTP. Finally, the level of consumption, the degree of knowledge about the product and the participants' previous experience increase the purchasing intention for the MG.\nOriginality/value: - In contrast to traditional market research studies that use hypothetical methods, the authors use non-hypothetical elicitation mechanism to elicit consumers' valuation for an important local food product in Catalonia, Spain. Since actual products and cash were used in the experiments, the authors provided the respondents with an incentive to reveal their true preferences and valuation for the product of interest. The authors also tested the effect of provision of certain types of information on WTP and analyzed the factors that significantly influence consumers' WTP for the product.\n",
"role": "assistant"
}
] |
The effects of adjacent competitors and promotion on brand sales
|
[
"Context effects",
"Consumer choice",
"Promotion",
"In-store merchandising"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\n__NO_TITLE__: An executive summary for managers and executive readers can be found at the end of this issue.\n1. Introduction: As one of the many decisions falling under the heading of assortment planning, category management is one of the most difficult decisions retailers make (Mantrala et al., 2009). Retailers must decide product variety, depth and service quality at multiple levels (e.g. store, department, category and stock-keeping unit - SKU). Limited shelf space makes these decisions critical, as shelf space allocation has not only been directly linked to retailer profits (Dreze et al., 1994; Rooderkerk et al., 2013; van Nierop et al., 2008) but also has implications for manufacturers and even consumers. Shelf allocation, while mainly determined by the retailer, is essentially a negotiation between the retailer and manufacturers. Retailers work to optimize financial outcomes based on specific assortments, while manufacturers pay slotting fees to obtain optimum shelf locations to maximize brand sales and profits (Desiraju, 2001). The outcome of such negotiations is a specific allocation of brands on a shelf, and the allocation presents the consumer with a specific choice context.\n2. Theoretical background and hypotheses: Shelf allocation has been linked to unit sales since the early 1970s (Cox, 1970; Curhan, 1972) and retailer (store) profitability since the early 1980s (Corstjens and Doyle, 1981, 1983). The basic findings from these early studies indicate that the number of facings (amount of shelf space) influences sales of both specific units (i.e. a specific stock keeping unit or SKU) and the overall category. Beginning in the early 1990s, studies applied field experiments and scanner data to refine allocation models to include physical shelf location for items (Campo et al., 2000; Dreze et al., 1994), leading to many of the \"rules of thumb\" retailers still use today. For example, Dreze et al. (1994) found that location has a large impact on sales, concluding that in most categories, products perform best when located on the shelf at eye level.\n3. Methodology: To test the hypotheses proposed in this paper, we focus on a relatively frequently purchased category, in which consumers may be uncertain about how to evaluate alternatives and may not have strong brand preferences - wine. We use 24 weeks of A.C. Nielsen scanner data from 50 stores of a regional grocery retailer. The study centers around 21 SKUs from 13 well-known and widely-distributed wine brands, which we do not name to preserve confidentiality. These select SKUs form a small submarket of Australian (65 per cent) and Californian (35 per cent) wines that are considered to be very close competitors. The market leader is determined as the brand with the highest market share across all wine varietals in the dataset. It is also the market leader within its varietals. The average number of units sold by SKU per week is 10.23, at an average retail price of $8.29. The wines are available in two bottle sizes, 750 mL (82 per cent) and 1.5 L (18 per cent), and all of the wines are sold in glass containers. The wines consist of four varietals:\n4. Analysis: Our model relates weekly sales as a function of traditional store-level scanner-data variables and adjacency variables. We use a standard log-log functional form to explain unit sales:\n5. Results: The estimated coefficients are summarized in Table III. 
Traditional marketing-mix variables influence sales as expected: price is negatively related to sales.\n6. Discussion: In this paper, we examined how sales of a focal brand are influenced by adjacent brands and their marketing activities. Extending previous research, our approach suggested potential interactions between the actual shelf location and marketing activities of adjacent brands, using consumer choice context as a foundation for explaining expected interactions. As such, our study is an initial step in determining the interdependency of actual shelf allocation and marketing activities on consumer purchase behavior. Our research has several implications for both researchers and managers.\n",
"role": "user"
},
{
"content": "Purpose: This paper aims to examine the potential for adjacent brands to influence sales of a focal brand. Specifically, this paper examines whether the type of adjacent brand (market leader or non-market leader competitor) and its promotion (featuring or discounting) have an effect on sales of a focal product.\nDesign/methodology/approach: Store-level scanner data supplemented with in-store audits are used.\nFindings: It is demonstrated that adjacent brands and their specific marketing activities influence focal brand sales in specific ways, and that market leaders have a disproportionate and different effect than other competitors on the sales of adjacent brands.\nOriginality/value: Recent marketing research has suggested that brand activities such as featuring and discounting and in-store environmental stimuli such as shelf allocation and displays can have an important effect on category and individual brand sales. Prior work in this area, however, has not explicitly considered the potential influence of relative shelf positions (i.e. adjacencies) on brand and category sales.\n",
"role": "assistant"
}
] |
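The Analysis section in the record above names a standard log-log functional form, but the equation itself is cut off in the extract. A minimal sketch of such a specification, using hypothetical regressors rather than the paper's actual adjacency and marketing-mix terms, is:

```latex
% Illustrative log-log sales model; variable names are placeholders,
% not the paper's actual specification.
\ln S_{it} = \beta_0 + \beta_1 \ln P_{it} + \beta_2 \ln F_{it}
           + \beta_3 D_{it} + \beta_4 A_{it} + \varepsilon_{it}
```

Here S_{it} is weekly unit sales of SKU i, P_{it} its price, F_{it} its number of facings, D_{it} a promotion indicator, and A_{it} an adjacency term; in this form, beta_1 and beta_2 are read directly as elasticities.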
Time wasting behaviour: how to identify and address it as the team leader
|
[
"Training",
"Management development",
"Leadership development"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\nIntroduction: Successful people, whether in business, sport, or life in general, differ from others in one key respect. They not only know something, they do something with that knowledge to help them succeed. According to research conducted for The Telegraph in 2015, the average worker wastes 60 hours a month, with the primary distractions being chatty colleagues and phone- or internet-related distractions (Huth, 2015). Management, inefficient use of technology, and low skill levels are cited as key reasons for low productivity rates in the United Kingdom (UK) (Bloom, 2015). According to Forbes, this epidemic is getting worse (Connor, 2015). Everyone engages in some time-wasting behavior, but when does it seriously get in the way of good outcomes and productivity? Time-wasting behavior is habitual and often occurs because \"it's the way we've always done it\" or \"it gets the job done\". Could things be done better and quicker though? Here are six of the most common time-wasting habitual behaviors, and some practical suggestions on how to stop them in your team.\n1. Thinking or talking about things which you cannot influence or control: If you were to monitor it, how much of your team's energy is channeled into topics which come into this category? For every second spent thinking or talking about something they cannot control, they are taking time away from things which they can do something about.\n2. Going over, in your head or with others, conversations or situations that have already happened: Our brain has a tendency to bring up unsatisfactory situations repeatedly and make us remember them.\n3. Doing activity without a clear outcome: Unhelpful habits creep in over time and stop people from using their time well. Typical activity that has no clear outcome includes checking your phone or email every five minutes. When you are not clear on your purpose, your brain just gives you habits as the next best thing. People who procrastinate do this a lot, as their brain \"freezes\" and diverts to some habitual activity that makes them, temporarily, feel better.\n4. Having too many open loops at one time: Open loops are ideas, activities, and decisions that people have started or opened but have not completed or closed (McLachlan and Meager, 2017). The minute you think about something you open a loop.\n5. Over-thinking without taking action: If something runs around your head without taking you forward, then this is potentially a waste of time and energy. Over-thinking is often a product of believing that it takes big steps to move something forward and wanting to get the big step just right. Over-thinking rarely solves any problem, it just runs the same issue around and around in the hope of finding a solution.\n6. Believing \"perfect\" exists: Perfectionism is the enemy of success. Nothing is perfect, no relationship, job, or piece of work. When you talk about perfection, what do you mean? The definition is important.\n",
"role": "user"
},
{
"content": "Purpose: The purpose of this paper is to help leaders and managers understand what is going on behind time-wasting behaviors they see in their teams and support them in making productive changes.\nDesign/methodology/approach: Taken from Time Mastery: Banish Time Management Forever by Karen Meager and John McLachlan. This study uses an integrative psychology approach drawing on modalities such as transactional analysis, neuro-linguistic programming (NLP), cognitive behavioral theory (CBT) and traditional leadership models, and integrating those with the authors' practical experience working in organizations over the past 15 years.\nFindings: Team members' time management issues can be addressed if leaders take a more holistic view of their behavior. Traditional time management approaches do not work for everyone, so leaders need to tailor their style to adapt. There are some common time-wasting behaviors that are easily addressed, once identified.\nOriginality/value: Taken from the newly published Time Mastery: Banish Time Management Forever by Karen Meager and John McLachlan, this study combines the psychology of how we process, communicate, and use time from a number of psychological methodologies and combines them with practical organizational case studies and experience.\n",
"role": "assistant"
}
] |
Concerns about change and employee wellbeing: the moderating role of social support
|
[
"Work engagement",
"Burnout",
"Social support",
"Concerns about change"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\n1. Introduction: Within the contemporary organizational change literature, one of the topic of interest, which focuses on personal employees' attitudes and reactions to change (Elias, 2009; Vakola and Nikolaou, 2005; Wanberg and Banas, 2000), has highlighted the stressful nature of organizational change and the factors underpinning for change success. Moreover, Isabella (1990) highlighted that the cognitive responses to the organizational change (perception about workload, autonomy, uncertainty, quality of communication etc.) evolve through a series of stages, labeled anticipation, confirmation, culmination and aftermath.\n2. Theoretical background: 2.1 Conceptualizing concerns about change\n3. Method: 3.1 Background and data collection\n4. Data analysis: Data analyses were performed using SPSS Statistics 24.\n5. Results: As emerged from correlations (see Table I), job resources and concerns about change correlated in the expected directions with all the outcome variables. CONT did not significantly correlate with WE. Consequently, we did not insert CONT in the regression models, including WE as a dependent variable.\n6. Discussion: The general aim of this study was to increase the extant knowledge about the consequences of the organizational change process on employee wellbeing, adopting a micro-level approach. Within this perspective, this study makes an original contribution by examining an understudied reaction to organizational change, namely, concerns about change evaluated during the anticipation stage, and which have been conceptualized as the anticipation of resource loss owing to future organizational change implementation.\n",
"role": "user"
},
{
"content": "Purpose: Concerns about change, a measure of the perception about future losses owing to organizational change, have received scarce attention within the organizational change literature. This study aims to address some relevant questions still unexplored regarding, the relationship between concerns about change and employees' burnout and work engagement. Moreover, it evaluates the buffering role of social support namely, that of colleagues and superiors, during the anticipation stage of an organizational change process.\nDesign/methodology/approach: Six hundred and thirty-two employees of an administrative public sector filled out a self-reporting questionnaire. Data analysis was performed using hierarchical moderated regression to show direct and moderating effects.\nFindings: Results suggest that concerns about change, measured during the anticipation stage of an organizational change process, relate to higher burnout and lower work engagement. Social support significantly affects the relationship between concerns and outcomes.\nOriginality/value: Overall, the study shows the role exerted from concerns about change in affecting employees' wellbeing as the early stage of the organizational change process, providing scholars and practitioners in human resources management with new insight regarding the importance of support from colleagues and supervisor to sustain successful change implementation and employees' wellbeing.\n",
"role": "assistant"
}
] |
Evaluation of third-party logistics (3PL) providers by using a two-phase AHP and TOPSIS methodology
|
[
"Distribution management",
"Analytical hierarchy process",
"Decision making"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\n1 Introduction: In today's environment, organizations are searching for new business practices and solutions that can provide them with success. Outsourcing is one of these business practices that can lead to greater competitiveness (Embleton and Wright, 1998). Outsourcing is simply the transferring of an internal service function to an external organization that otherwise would be difficult to acquire, or costly to have in-house (Ketler and Walstrom, 1993; Razzaque and Sheng, 1998). In the recent past, logistics outsourcing or third-party logistics (3PL) has received considerable attention from logistics researchers. Three research areas for 3PL have attracted interest among the researchers (Boyson et al., 1999): first, the primary reasons for outsourcing logistics functions; second, measuring the contribution of logistics outsourcing to a firm's capabilities; and lastly the selection of 3PL provider for outsourcing relationship. This paper focuses on the last theme.3PL involves the use of external companies for the management of some or all of the firm's logistics functions that have traditionally been performed within an organization. These functions performed by the third-party can encompass the entire logistics process or selected activities within that process (Lieb et al., 1993; Millen et al., 1997; Lieb and Miller, 2002; Lieb and Bentz, 2004, 2005). 3PL providers have specific logistics core competencies, and they can manage logistics processes more effectively and efficiently than their partners. Therefore, outsourcing logistics activities to specialized 3PL providers has become a rapidly expanding source of logistics cost savings, competitive advantage, and customer service improvements (Boyson et al., 1999; Gunasekaran, 2002). These benefits can also help reduce the need for logistics related capital investments in facilities, equipment, manpower, and information technology (Wang et al., 2006).From the above, an appropriate 3PL provider is strategically vital to companies that wish to concentrate on core competencies that are critical to competitive advantage, and leaving the rest to specialized firms (Bhatnagar et al., 1999; Troyer and Cooper, 1995). Practitioners have realized this importance by spending time and resources on these supporting business practices. However, many other firms have failed because of complexity and incompatibility in 3PL relationships, fear of loss of functional control, and uncertainties about the service levels to be provided by 3PL providers (Boyson et al., 1999). As a consequence, in some cases, 3PL has become a source of corporate failure and disappointment. Hence, in this paper, the evaluation of 3PL provider problem is handled.3PL provider selection can be viewed as a complex multi-criteria decision-making (MCDM) problem due to the availability of quantitative, qualitative, and multiple criteria that have to be considered in the decision process. The paper is organized in four sections. First of all, the evaluation framework for 3PL provider selection is detailed. The proposed methodology procedure is applied next. This is realized by adopting a modified Delphi method with the cooperation of multiple experts to establish an evaluation criteria hierarchy. Analytic hierarchy process (AHP) and the technique for order preference by similarity to ideal solution (TOPSIS) methodology are then employed to support the whole benchmarking process. 
In other words, AHP is applied to determine the relative weights of evaluation criteria, and TOPSIS is applied to rank the potential 3PL providers. Section 3 briefly describes the application of the proposed framework to 3PL provider selection. In this section, an actual case example is presented to show the effectiveness of the proposed approach. Also, a sensitivity analysis is made to discuss and explain the study results. Finally, Section 4 contains some concluding remarks and perspectives. The proposed benchmarking framework in this paper can help the case company to identify its strengths and weaknesses and then select the best partner for the 3PL outsourcing relationship. Therefore, the aim of this study is to propose an MCDM approach to evaluate the 3PL providers with respect to the experts' preference order. By proposing an MCDM model, this paper will not only assist managers in making better decisions in evaluating decision criteria, but will also help them to select the best 3PL provider by using a two-phase AHP and TOPSIS methodology.\n2 An evaluation framework for 3PL provider selection: Many researchers have conducted studies to select the suitable 3PL provider in the logistics process using mathematical programming models, different MCDM methods, and intelligent algorithms. Min and Joo (2006) aimed to illustrate the usefulness of data envelopment analysis (DEA) for measuring and benchmarking the operational efficiency of 3PL providers. Zhou et al. (2008) proposed DEA to measure the competitiveness of ten leading 3PL providers in China. Gol and Catay (2007) were the first to use the AHP procedure to aid in the selection of the right global logistics service provider in a Turkish automotive company. Jharkharia and Shankar (2007) developed an analytic network process (ANP) model for the selection of an appropriate 3PL provider. Meade and Sarkis (2002) proposed an ANP model for selecting and evaluating third-party reverse logistics providers. In a different study, a hybrid approach of interpretive structural modeling and ANP for the proper selection of a 3PL provider was developed by Thakkar et al. (2005). Similarly, Buyukozkan et al. (2008) proposed a hybrid model to select the most suitable strategic alliance partner in the logistics value chain using fuzzy AHP and fuzzy TOPSIS approaches. In another fuzzy-logic-based study, a fuzzy TOPSIS approach was developed for the selection and ranking of the best 3PL provider (Bottani and Rizzi, 2006). Intelligent techniques such as case-based reasoning in a fuzzy environment were also proposed by Yan et al. (2003) and Isiklar et al. (2007). As described above, there are many MCDM methods; however, the critical importance of capturing and combining a group of managers' opinions with an integrated MCDM methodology has not been highlighted in previous research in this field. Thus, there is a need for a combined decision-making method to guide managers in taking a proper decision about 3PL provider selection. This is considered in this paper using a two-phase MCDM method, namely the AHP and TOPSIS methodology. Saaty (1994) claimed that AHP has some advantages. First, it helps to decompose an unstructured problem into a reliable hierarchic structure that includes various criteria, sub-criteria, and alternatives to determine the best choice. Second, it can elicit judgments from decision makers to determine weights of the elements. Third, it uses an admissible consistency ratio (CR) to validate the consistency of the decision-making process. 
Besides, the TOPSIS method can be applied to complement the AHP method for this kind of real-life problem. The TOPSIS method is a rational, understandable, and easily programmable computation procedure. With the integrated application of the AHP and TOPSIS methods, the evaluation process gains the advantages of the two methods while offsetting the insufficiencies of each. Therefore, a more realistic approach may be to use an integrated AHP and TOPSIS methodology instead of other MCDM methods in the decision of 3PL provider selection. The evaluation procedure of this methodology is shown in Figure 1. The detailed descriptions of the major steps are given in the following subsections. 2.1 Modified Delphi method\n3 The application of the proposed framework to 3PL provider selection: The problem is the evaluation of the potential 3PL providers and the selection of the best one. For this reason, first of all, evaluation criteria must be determined to evaluate the five potential 3PL providers by using the modified Delphi method. Second, a two-phase AHP and TOPSIS methodology is developed to realize the evaluation. For this purpose, the weights that are gained from the AHP calculations are used in the TOPSIS calculations. It must be emphasized that the AHP weights are obtained from the modified Delphi method, and these weights are then used as TOPSIS inputs. TOPSIS is then applied to the evaluation problem, and the final ranking of the preference order for the five potential 3PL providers is obtained (Gumus, 2009). At the end of this section, the results of the proposed methodology and the changes in the weights of different main criteria are analyzed in detail by sensitivity analysis. 3.1 An actual case example\n4 Conclusion: In recent years, it appears that a particular emphasis has been placed by researchers on the problems of 3PL provider selection. Thus, the objective of this study is to analyze the potential alternative 3PL providers, and to choose the best candidate by using an MCDM approach. The development of the proposed benchmarking framework is based on a two-phase AHP and TOPSIS methodology. For this purpose, this paper prioritizes and then ranks the five 3PL providers on the basis of the incorporated AHP and TOPSIS methodology. To show the potential of the methodology, an actual case example is presented. A group of experts from the case company participated in the decision-making group to identify evaluation criteria and to establish the evaluation criteria hierarchy. Afterwards, the model captured the judgments of these experts by using the modified Delphi method, and criteria weights were then computed by using AHP based on the experts' judgments. TOPSIS was then employed to obtain the final ranking results of the two-phase AHP and TOPSIS methodology. This paper summarizes several major findings of this benchmarking study and develops practical guidelines for selecting the best 3PL provider for the company XYZ. Company XYZ adopted our proposed benchmarking framework to deal logically with that complicated evaluation problem and selected A1 as the best 3PL provider. The results obtained from the integrated approach were then discussed with the managers of the case company and found to be in accordance with the company's mission, objectives, plans, and policies. According to them, the major advantage of this approach is that it assisted them in thinking in a comprehensive way. 
Similarly, those managers agreed that the model would help in examining the strengths and weaknesses of the potential 3PL providers by benchmarking them with respect to the evaluation criteria and sub-criteria. Sensitivity analysis was also performed to investigate the sensitivity of the results to changes in the weights of the main criteria. This is a particularly important aspect of this methodology, since results are based on experts' assessments. Therefore, managers can use these different weight combination results in managing, benchmarking, and continuously improving their 3PL provider selection decisions. Use of a two-phase AHP and TOPSIS methodology offers a number of benefits. First, it is a systematic and reliable method since it is capable of capturing experts' opinions when complex MCDM problems are considered. Thus, the use of AHP weights in TOPSIS makes the benchmarking process more rational and realistic. Because of this ability, managers can use this method in making their strategic decisions. In other words, it serves as a guideline to managers in outsourcing-related benchmarking decisions. Second, it considers the hierarchical structure, pairwise comparisons, and consistency checks in the evaluation process. Therefore, it can be said that it is an excellent tool to handle quantitative and qualitative assessments in the 3PL provider evaluation problem, and its calculations are faster than other mathematical techniques such as DEA, ANP, and intelligent and fuzzy-logic-based approaches. Finally, the combined AHP and TOPSIS method is very flexible and suitable for various decision situations. Although the proposed model provides a comprehensive framework to guide the managers of the case company, there are some limitations of the model. First, the proposed method did not consider all possible criteria that could be added to the model. The criteria for 3PL provider selection may include business experience, geographic location, reliability, reputation, delivery time, IT infrastructure, training systems, and customer service, etc. Therefore, the ranking of the alternatives obtained by using the AHP and TOPSIS methodology may change if a new criterion is added. Besides, additional factors require additional time and effort for the completion of such a model. In this case, the formation of the pairwise comparison matrices would be a time-consuming and complex task. Second, the model is very dependent on the weightings provided by the decision makers. Therefore, its efficacy depends on the accuracy and the value of the judgment provided by the decision makers. Also, the full involvement of the relevant managers from the same organization would help to utilize their experience and expertise in the decision-making process. On the other hand, the proposed method has been effectively used in the 3PL evaluation problem. In addition, the proposed benchmarking framework, with minor modifications, can be useful to all firms in their 3PL provider selection decisions. It is anticipated that a further study will improve the method to deal with other management problems. It could also be extended with other outranking methods, such as the PROMETHEE, ORESTE, and ELECTRE techniques, and heuristic-based methods in solving such a problem. These methods have been recently developed for use in a business environment. Further research may be the application of these methods to the 3PL provider evaluation problem and the comparison of the results.\n",
"role": "user"
},
{
"content": "Purpose: - Third-party logistics (3PL) provider selection has gained great attention in logistics management literature. The purpose of this paper is to provide a good insight into the use of a-two-phase analytical hierarchy process (AHP) and technique for order preference by similarity to ideal solution (TOPSIS) approach that is a multi-criteria decision-making methodology in the evaluation of 3PL providers.\nDesign/methodology/approach: - In this paper, after the selection criteria of 3PL providers are determined by modified Delphi method, the weights of criteria have been calculated by applying the AHP method. The TOPSIS method is then employed to achieve the final ranking results. And an actual case example is presented to clarify the methodology. Sensitivity analysis is also given to demonstrate how sensitive the proposed model is to changes in the weights of different main criteria.\nFindings: - This model provides decision makers with a simple, flexible, and easy-to-use approach to evaluate potential 3PL providers efficiently. Findings demonstrate that the proposed benchmarking framework, with minor modifications, can be useful to all firms in their 3PL provider selection decisions.\nResearch limitations/implications: - A two-phase AHP and TOPSIS methodology is very flexible and suitable for various decision situations. However, selection of the appropriate 3PL provider requires consideration of multiple alternative solutions and evaluation criteria because of complex methodology, and hence it increases the effort.\nOriginality/value: - This is probably the first time that an attempt has been made to apply the modified Delphi method, AHP and TOPSIS methodology in the decision of 3PL provider selection in a Turkish automotive supplier company. This is the most powerful motivation to consider this problem. In addition, the paper is especially of interest to managers as they make decisions on which criteria should be considered in the evaluation process and how a decision model should be structured by using a two-phase AHP and TOPSIS methodology.\n",
"role": "assistant"
}
] |
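The two-phase procedure summarized in the record above (AHP weights from a pairwise-comparison matrix, fed into a TOPSIS ranking) can be sketched briefly. This is a minimal sketch assuming hypothetical pairwise judgments, provider scores, and criterion types, not the paper's actual criteria hierarchy or case data:

```python
# Minimal AHP + TOPSIS sketch. The pairwise judgments, provider scores, and
# criterion types below are hypothetical stand-ins, not the paper's data.
import numpy as np

def ahp_weights(pairwise):
    """Criteria weights = normalized principal eigenvector of the pairwise matrix."""
    vals, vecs = np.linalg.eig(pairwise)
    v = np.real(vecs[:, np.argmax(np.real(vals))])
    return v / v.sum()

def consistency_ratio(pairwise):
    """Saaty's CR; judgments are conventionally acceptable below ~0.10."""
    n = pairwise.shape[0]
    lam = np.max(np.real(np.linalg.eigvals(pairwise)))
    random_index = {3: 0.58, 4: 0.90, 5: 1.12}  # partial random-index table
    return ((lam - n) / (n - 1)) / random_index[n]

def topsis(decision, weights, benefit):
    """Closeness coefficients; higher means closer to the ideal solution."""
    v = decision / np.linalg.norm(decision, axis=0) * weights  # normalize, weight
    ideal = np.where(benefit, v.max(axis=0), v.min(axis=0))
    anti = np.where(benefit, v.min(axis=0), v.max(axis=0))
    d_pos = np.linalg.norm(v - ideal, axis=1)
    d_neg = np.linalg.norm(v - anti, axis=1)
    return d_neg / (d_pos + d_neg)

# Hypothetical example: 3 criteria (the last cost-type), 5 candidate providers.
pairwise = np.array([[1, 3, 5], [1/3, 1, 2], [1/5, 1/2, 1]])
w = ahp_weights(pairwise)
assert consistency_ratio(pairwise) < 0.10
scores = np.random.default_rng(0).uniform(1, 9, size=(5, 3))
closeness = topsis(scores, w, benefit=np.array([True, True, False]))
print("provider ranking (best first):", np.argsort(-closeness))
```

The eigenvector step and the CR check correspond to the AHP phase the abstract describes; the vector normalization and the ideal/anti-ideal distances are the standard TOPSIS steps that produce the final preference order.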
The kaleidoscope of disciplinarity
|
[
"Publishing",
"Epistemology",
"Documentation"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\nIntroduction: The Oxford English Dictionary traces the etymology of \"discipline\" as the antithesis of \"doctrine.\" Discipline pertains to practice or experience in the context of disciples or scholars, whereas doctrine (the property of the doctor or teacher) is more concerned with abstract theory. In present-day academia, it is more difficult to find a consensus on the nature of disciplines. Disciplines are often considered ephemeral abstractions, coalesced around some combination of socio-cognitive, institutional, and technological entities. Basically, disciplines are the stuff of knowledge, people, and the things with which they interact. Physics, sociology, and history are probably disciplines. Cupcakes, true love, and fraternities probably are not. The jury is still out on entrepreneurship, computational linguistics, and digital humanities. The question, of course, is why.\nObjective and approach: The objective of this paper is not to provide a prescriptive list of elements that are necessary components of disciplinarity, nor is it to recommend a measurement for identifying the birth, maturation, and interaction of disciplines. Rather, we aim to provide examples of the ways in which authors have conceptualized and operationalized disciplinarity, exploring their incongruencies and interrelations. It is also to highlight the necessity for scholars to think in a holistic way about conceptualizations and operationalizations, especially those which precede measurements and metrics.\nConceptualizations: Scholars from a wide range of disciplinary perspectives have weighed in on conceptualizations of disciplinarity. We agree with Messer-Davidow et al. (1993) that disciplinarity is in some ways a \"problematic neologism\" as it is often conflated with epistemology, sociology of knowledge and the history of ideas (p. 1). We focus here on the nature, criteria and \"possibility conditions of disciplines\" (Messer-Davidow et al., 1993, p. 2) and aggregate themes from scholars who have sought to explicitly define and understand what contributes to and constitutes a discipline as an entity.\nNarratives: Once an area of study is seen to have reached a certain point of maturation, it is fairly common for the history of the discipline to be documented in some fashion. These disciplinary histories provide implicit notions of the criteria necessary to achieve disciplinarity: the focus on certain elements demonstrates the activities that are involved in the codification and maturation of a discipline. Although some narratives are used to show that a discipline does not exist (Amariglio et al., 1993), examining a range of disciplinary histories can reveal shared characteristics that are common in defining disciplines.\nMeasurements: Disciplinarity itself became the subject of academic study primarily within the context of interdisciplinarity beginning in the late 1960s. Interdisciplinarity has remained a popular topic, especially in quantitative analyses of scholarship such as scientometrics, which has led to a new surge in disciplinary definitions. A fundamental feature of quantitative analyses is explicit operationalizations of concepts, which in many cases include measurements of interdisciplinarity. Encoded within these measurements are implicit definitions of disciplines. 
These studies rarely define disciplines explicitly, and when they do, those definitions are often at odds with their operationalizations.\nDiscussion and future research: Convergence\n",
"role": "user"
},
{
"content": "Purpose: The purpose of this paper is to identify criteria for and definitions of disciplinarity, and how they differ between different types of literature.\nDesign/methodology/approach: This synthesis is achieved through a purposive review of three types of literature: explicit conceptualizations of disciplinarity; narrative histories of disciplines; and operationalizations of disciplinarity.\nFindings: Each angle of discussing disciplinarity presents distinct criteria. However, there are a few common axes upon which conceptualizations, disciplinary narratives, and measurements revolve: communication, social features, topical coherence, and institutions.\nOriginality/value: There is considerable ambiguity in the concept of a discipline. This is of particular concern in a heightened assessment culture, where decisions about funding and resource allocation are often discipline-dependent (or focussed exclusively on interdisciplinary endeavors). This work explores the varied nature of disciplinarity and, through synthesis of the literature, presents a framework of criteria that can be used to guide science policy makers, scientometricians, administrators, and others interested in defining, constructing, and evaluating disciplines.\n",
"role": "assistant"
}
] |
A canonical analysis of intentional information security breaches by insiders
|
[
"Data security",
"Behaviour",
"Motivation (psychology)"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\n1 Introduction: Devastating cases of intentional information security breaches by trusted insiders occur on a regular basis (Herbig and Wiskoff, 2002). However, it is generally not in the interests of organizations to report information security breaches. As a result, relatively little is known about the conditions, which foster such events, and less is known about the backgrounds of the individuals involved (Crawford and Bosshardt, 1993). This is rather unfortunate; such information could be useful in predicting and preventing future occurrences.Thus, the purpose of this research is to examine information security breaches by trusted end-users that are intentional, malicious acts. In particular, this study focuses on actions such as information technology (IT) espionage and IT sabotage, and identifies the attributes of individuals who commit such behaviors. Care was taken to select characteristics which are readily identifiable by astute managers or supervisors.Within the information security context, espionage activities involve the use of organizational information systems by insiders for the retrieval of data, which is stolen and sold to others, to the detriment of the organization (Band et al., 2006). Sabotage involves the destruction of organizational data and information systems (Cappelli et al., 2006). These criminal activities may be committed for a number of reasons, including retribution for firing or dismissal, to please a spouse or a friend, because one's judgment has been impaired by controlled substances such as drugs or alcohol, or to prevent or alleviate a financial crisis (Knapp et al., 2006; Giacalone, 1987).To assist in the prediction and deterrence of future internal information security breaches, the relationships between individuals' backgrounds and acts of espionage and sabotage are assessed. A canonical analysis, using 62 cases of intentional information security breaches by insiders, is conducted. The canonical analysis technique is utilized because relatively little information exists in this topic area. Canonical analysis provides an opportunity to explore relationships among sets of independent and dependent variables with a minimum of assumptions. The outcome variables include acts of sabotage and acts of espionage, as they pertain to organizational information systems. The predictor variables include financial changes, relationship strains, substance abuse, and job changes. The results of the analysis indicate that a significant latent structure exists among the independent and dependent variables. Instances of IT sabotage were closely correlated with job changes, substance abuse, and relationship issues, while espionage was highly correlated with financial changes or demands. Finally, there was little correlation among the dependent variables.This paper is organized as follows: the following section provides basic background information, including a discussion of intentional security breaches and a description of the predictor variables used in the study. Section 3 describes the sample population, measures, and analysis. Next, the results of the canonical analysis are presented and interpreted. Finally, concluding comments are made.\n2 Background: This literature review is not intended to provide an exhaustive look at all materials related to intentional security breaches, but to lay a foundation by treating the most common concepts and terminology used within this study. 
First, the basics of endpoint security will be addressed. Next, the most serious actions associated with intentional security breaches - espionage and sabotage - will be explored. Four predictors - financial changes, relationship issues, substance abuse, and job changes - will then be described, and associated hypotheses will be made. 2.1 Endpoint security\n3 Methodology: To test the relationships among IT sabotage, IT espionage, and the predictors, 62 cases of intentional information security breaches were reviewed and analyzed using canonical analysis. Canonical analysis is a multivariate regression technique for testing the relationships between groups of dependent and independent variables. The results of this analysis will indicate which predictors correlate with IT sabotage and which predictors correlate with IT espionage. 3.1 Sample population\n4 Analysis: This analysis used a statistical technique, known as canonical analysis, to test the relationships between the outcome variables, sabotage and espionage, and the predictor variables, financial changes, relationship strain, employment changes, and substance abuse. This type of statistical test will indicate which predictors (if any) are significantly related to sabotage and which are related to espionage. The SPSS software package was used for this analysis. As a precursor to conducting this test, it is necessary to ensure that the sample size is sufficiently large. As a general rule, the minimum sample size for canonical analysis is ten cases per variable (Hair et al., 1998). Because six variables are included in the analysis and the sample size is 62, it was determined that sufficient cases exist to perform canonical analysis. The SPSS software was configured to develop two canonical functions: one for sabotage and one for espionage. For those less familiar with canonical analysis, canonical functions may be construed as something similar to regression equations. Although the testing of canonical functions differs from the testing of regression equations, the principle of correlation among dependent and independent variables remains. Several tests of correlation, including F tests, Wilks' lambda, Pillai's criterion, Hotelling's trace, and Roy's greatest characteristic root (GCR), were used. As indicated in Table II, both functions were significant. This means that there is correlation between the predictor variables (financial changes, relationship strain, employment changes, and substance abuse) and the dependent variables (sabotage and espionage). The following steps in this analysis concern the validity of the functions and the relative significance of each of the individual predictors. Table III depicts the redundancy indices for the independent and dependent variates of the first function. As shown, the redundancy index for the dependent variate is of a modest size (0.407). However, that of the independent variate is not quite as substantial (0.212). This appears to be due to a strong delineation between the dependent and independent variables, and therefore this is not viewed as a significant issue. This low redundancy is the result of relatively low shared variance in the independent variate (0.340), not the canonical R2. The second function's redundancy analysis provides slightly different results. The canonical R2 is down to 0.240. 
In addition, in the second function, the shared variance declined slightly for the second set of variables (0.199 for the independent variate; Table IV). Having determined the number of canonical functions to include in the analysis, the canonical weights for each canonical variate are then considered (Table V). The magnitude of a canonical weight represents its relative contribution to the variate (Hair et al., 1998). The order of contribution of the canonical weights is based on the size of the weights. Because canonical weights may be unstable, especially in cases of multicollinearity, an investigation of the canonical loadings and canonical cross-loadings is prudent. In the first variate, acts of sabotage has a strong loading (0.984). Acts of sabotage is highly related to job changes (0.923) and substance abuse (0.505), and marginally related to relationship changes (0.034). The only independent variable with which acts of sabotage is not related is financial changes (-0.294). An analysis of the second variate indicates that acts of espionage (0.814) is most highly related to financial changes (0.704), followed by relationship issues (0.109). Espionage was not found to be related to substance abuse or job changes. Besides assessing the canonical weights and loadings, it is also necessary to assess the canonical cross-loadings. A canonical cross-loading is the correlation of an observed independent or dependent variable with the opposite variate (Hair et al., 1998). With regard to the first canonical function, acts of sabotage is highly related to the independent canonical variate (0.777), while acts of espionage does not appear to be related (-0.459). This indicates low shared variance between the dependent variables. This pattern is replicated in the second canonical function.\n5 Results: Table VI summarizes the results of hypothesis testing. The results of the analysis indicate that significant structure exists among the predictor and outcome variables. However, there was relatively little shared variance among the dependent variables, acts of espionage and acts of sabotage. Furthermore, there were significant differences in the impact of the predictor variables. H1, H2, H3, and H4A were supported. Financial changes, followed by relationship changes, were highly correlated with acts of espionage. Sabotage behavior was most highly correlated with job changes, substance abuse, and relationship changes. Surprisingly, the predictor, job changes, was not correlated with IT espionage. Thus, H4B was rejected. Because this finding strongly contradicts earlier studies of corporate espionage, more research will be needed in order to conclusively state that job changes are not a significant predictor of IT espionage. Based on these findings, it is reasonable to assume that the warning signs for information system sabotage differ from those for espionage. It is also possible that different types of intentional information security breaches are driven by different motivating forces. For example, system sabotage may be driven by a need for revenge or retaliation, whereas the theft of organizational data may result from a desire for additional compensation.\n6 Implications for practice: Fortunately, it appears that diligent managers and supervisors have a number of opportunities to identify the warning signs of employees on the verge of committing information security breaches. 
This research explored four salient predictors of IT sabotage and IT espionage: financial changes, relationship changes, substance abuse, and job changes. These are issues which generally come up in the course of \"water-cooler\" conversation - informal discussions among coworkers. In many of the cases which were reviewed, the subject exhibited a number of warning signs. Thus, it is recommended that the employer should follow up on such conveyances in a diligent manner. It should also be noted that not every unhappy employee will take actions to harm the company's information systems. Being overly suspicious and invasive can have detrimental effects. The key is to find a balance. It is recommended that managers should actively listen to what employees are saying about themselves and about each other, and ask questions in an informal, unobtrusive manner. Regarding individuals of genuine concern, the best course of action may be a referral to human resources. This action should be framed in such a light that the employee believes others are concerned for his or her wellbeing. Finally, controls for internal information security should be reviewed on a regular basis. Tools such as group policy objects, authentication, and security analyzers should be used to minimize the threat of intentional breaches (Herzog and Shahmehri, 2007). In addition, a procedure for closing out employee accounts and systems access should be developed and/or streamlined. This routine should be followed immediately after the termination, layoff, or resignation of every employee. Depending on the nature of the organization, it may be necessary to disable hardware for capturing and removing data; disabling flash drives, printers, disks, and other output media will increase the difficulty associated with data theft.\n7 Conclusion: Although rarely reported, intentional information security breaches by insiders seriously cripple organizational information systems. Often, the only means for minimizing such risks is continued vigilance on the part of managers and peers. This study presents a rudimentary examination of the darker side of user behavior - intentional information breaches - by studying the backgrounds and motivations of 62 insiders in an effort to identify potential warning signs. Sixty-two cases of intentional information security breaches by insiders were examined and a canonical analysis was conducted. Two major findings were identified. First, a significant relationship exists between financial hardship, relationship changes, and the theft and sale of proprietary data by insiders. Second, recent firings, instances of alcoholism, and relationship breakups are related to information system sabotage. Because the roles of the predictors vary for each behavior, and because the behaviors have little common variance, it will be necessary to study the motivations for each behavior separately. Future research should be directed toward the psychological underpinnings of different types of internal information security breaches.\n",
"role": "user"
},
{
"content": "Purpose: - The paper focuses on intentional information security breaches by insiders. The purpose is to assess the relationship between insiders' backgrounds and motivations and their deviant behaviors. Two outcome variables, information technology (IT) espionage and IT sabotage, are correlated with four predictors, financial changes, relationship strains, substance abuse, and job changes.\nDesign/methodology/approach: - Some 62 cases of intentional information security breaches by insiders are examined using canonical analysis.\nFindings: - The results indicate that a significant relationship exists between financial hardship, relationship strains, and the theft and sale of proprietary data by insiders; and recent firings, substance abuse, and relationship strains are related to information system sabotage.\nResearch limitations/implications: - Because little or no research has been conducted on this topic, there is a lack of validated measures for variables associated with information security. Thus, the measures used in this paper are necessarily simplistic. Because few organizations report information security weaknesses, the sample is relatively small.\nPractical implications: - In the majority of cases included in this paper, it is found that the insider convey a number of warning signs before committing the security breach. After reading this paper, diligent managers should be able to identify potential security breaches.\nOriginality/value: - This is one of the first studies to explore insider security breaches using canonical analysis.\n",
"role": "assistant"
}
] |
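The canonical analysis described in the record above (two outcome variables related jointly to four predictors) can be reproduced in outline with scikit-learn's CCA. This is a minimal sketch: the 62 cases below are simulated binary indicators with hypothetical labels, not the study's data:

```python
# Minimal canonical-correlation sketch; simulated data, hypothetical labels.
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(1)
n = 62  # matches the study's sample size; the values themselves are simulated
X = rng.integers(0, 2, (n, 4)).astype(float)  # financial, relationship, substance, job
Y = rng.integers(0, 2, (n, 2)).astype(float)  # sabotage, espionage

cca = CCA(n_components=2)
Xc, Yc = cca.fit_transform(X, Y)

# Canonical correlations: correlation between each pair of variates.
can_corr = [np.corrcoef(Xc[:, i], Yc[:, i])[0, 1] for i in range(2)]

# Canonical loadings: correlation of each predictor with the first X-variate.
loadings = [np.corrcoef(X[:, j], Xc[:, 0])[0, 1] for j in range(4)]

print("canonical correlations:", np.round(can_corr, 3))
print("predictor loadings on first variate:", np.round(loadings, 3))
```

With the real case data, the loadings would be read the way the abstract reads Table V: large loadings flag the predictors that travel with each outcome variate.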
Impact of exchange rate movement and macro-economic factors on exports of software and services from India
|
[
"Performance",
"Strategic evaluation"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\n1. Introduction: Indian software industry witnessed a phenomenal growth in the last decade. In 1998-1999 which was a mere USD4.6 billion industry, has now become a giant USD108 billion industry in FY2012-2013 according to the industry body NASSCOM (National Association of Software and Services Companies). As a proportion of national GDP, the sector revenues have grown from 1.2 percent in FY1998 to an estimated 7.5 percent in FY2012. Export revenues estimated to reach USD69.1 billion in FY2012 growing by over 16 percent as reported by NASSCOM. This sector is also helping India to improve its current account deficit (CAD) which is at record high by bringing in revenues in foreign currencies.\n2. Literature review: A study of published literatures from perspectives of flactuation of exchange rate and development of software industry in India throws some interesting viewpoints. Previous researchers' literatures have explored the relationships between the exchange rate flactuation and overall exports of India. The notable work in this context considered in this study are the works of Prakash and Sharma (2009), Veeramani (2008), Srinivasan and Wallack (2003). In the recent paper \"Impact of exchange rate movements on exports: an analysis of indian non-financial sector firms (2012)\" Cheung and Sengupta explored the relationship between exchange rate and variations and Non-Financial sector export.\n3. Modeling frameworks: Reserve Bank of India (RBI) has published export and import data of \"Software services\" separately from the first quarter, i.e. April-June quarter of financial year 2000-2001. So, data from Q1 (April-June) of FY 2000-2001 to Q4 (January-March) of FY 2011-2012 is taken. It comprises of data for 48 consecutive quarters. We have taken the growth rate, so the final data set consist of data of 47 quarters. The main source to obtain data are \"Handbook of Statistics on Indian Economy\" published by RBI and it is also available on their website. Apart from this data are also collected from the website \"www.tradingeconomics.com\" and World Bank official website. The data collected for export of software services, merchandise export, REER, USD-INR exchange rate, GDP of India and selected countries. The impacts of the explanatory variables are observed on the response variables software and merchandise export. Possible explanation of the observed impact is also given in the later sections with the analysis of the obtained results.\n4. Methodology: To find the relationship among response (dependent variable) and explanatory (independent variable) we have done the multiple regression analysis on the obtained time-series data. The research was totally based on the secondary data collected from reliable sources (\"Handbook of Statistics on Indian Economy\" published by RBI from 2000-2012). The simple linear regression technique is used to analysis the model:\n5. Empirical results and analysis: The descriptive statistics of the response and explanatory variables is shown in Table II.\n6. Conclusion and recommendations: Software export is heavily dependent on the external demand. Slow down in USA and Europe impact most on software industry as the foreign clients cut down their IT budget and decision making on IT spending also got delayed. Change in REER is also having some impact as it increase or decrease the earnings of the IT companies. 
But most of the companies hedge against forex risk, as their earnings are heavily dependent on exports. So, in the process, they make some forex gains and losses, but their earnings remain stable, and we have witnessed strong growth in exports of software services in the last decade. The macro effect used in this study is the GDP growth of India. It does not affect software exports much, as more than 90 percent of the revenue of the software industry comes from outside India. Political scenarios in exporting countries affect this sector much more than the political scenario in India. The IT sector is heavily influenced by external demand, which in turn is impacted by the political scenarios in exporting countries and policy decisions by those governments. Whenever there is any talk of curbing outsourcing in the USA, the stock prices of Indian IT companies take a hit. This sector is driven by human capital, which consists of knowledge workers, mainly engineers and management graduates. So, issues like land acquisition and trade union strikes do not affect this sector.\n",
"role": "user"
},
{
"content": "Purpose: The information technology (IT) sector in India is the leading exporter from the service sector domain and also is a significant contributor to the overall export kitty of India. The IT sector's contribution in total Indian exports (merchandise plus services) increased from less than 4 percent in FY1998-1999 to about 25 percent in FY2011-2012 as per IT industry nodal body National Association of Software and Services Companies and the central bank of the country, the Reserve Bank of India (RBI). As this industry earns most of its revenue in foreign currencies it is exposed to the foreign exchange risks. The purpose of this paper is to validate the macro-economic theory that depreciation in domestic currency boosts export as it makes domestic good and services cheaper and appreciation in domestic currency deters export as it makes domestic good and services costlier. The authors are validating this theory for Indian rupee and keeping software services export in the focus.\nDesign/methodology/approach: In this study the authors have done the multiple regression analysis on the obtained time-series data. The research was totally based on the secondary data from Quarter1 (April-June) of FY 2000-2001 to Quarter4 (January-March) of FY 2011-2012. It comprises of data for 48 consecutive quarters. The authors have taken the growth rate, so the final data set consist of data of 47 quarters. The main source of data are published data by RBI. Data have been collected for export of software services, merchandise export, real effective exchange rate, US-dollar-Indian rupee exchange rate, gross domestic product of India and selected countries.\nFindings: Data analysis leads the authors to the following findings: real effective exchange rate has no significant impact on software services export; US-dollar-Indian rupee exchange rate has no significant impact on software services export; external gross domestic product growth has no significant impact on software services export; and gross domestic product growth of India has no significant impact on software services export. The results obtained from multiple regression analysis are also supported by the results obtained from Granger Causality test. It does not identify any single factor as a major cause of software export. Results shows that the external GDP is having the statistically significant impact on the software export but the low value of R2 denotes that the impact is very low.\nOriginality/value: There are no published studies available which has attempted similar kind of an approach to study using aggregated export data and other macro-economic variables like real effective exchange rate (REER) and GDP growth rate. All previous literatures used REER to measure the impact of the exchange rate on export.\n",
"role": "assistant"
}
] |
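The regression-plus-Granger design in the record above is straightforward to sketch with statsmodels. This is a minimal sketch: the 47 quarterly growth series below are simulated stand-ins with hypothetical column names, not the RBI data:

```python
# Minimal sketch of the regression and Granger-causality steps described above.
# All series are simulated; only the shape mirrors the study (47 quarters).
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(2)
df = pd.DataFrame({
    "software_export_growth": rng.normal(0.04, 0.02, 47),
    "reer_change": rng.normal(0.00, 0.01, 47),
    "usdinr_change": rng.normal(0.00, 0.02, 47),
    "external_gdp_growth": rng.normal(0.005, 0.004, 47),
    "india_gdp_growth": rng.normal(0.018, 0.005, 47),
})

# Multiple regression of export growth on the macro regressors.
X = sm.add_constant(df.drop(columns="software_export_growth"))
print(sm.OLS(df["software_export_growth"], X).fit().summary())

# Does external GDP growth Granger-cause software export growth?
grangercausalitytests(
    df[["software_export_growth", "external_gdp_growth"]].values, maxlag=2)
```

With the real quarterly series, the t-statistics from the OLS summary correspond to the paper's significance findings, and the Granger F-tests to its causality check.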
Training the trainer: Optimal styles and learner behaviors
|
[
"Training",
"Trainee satisfaction",
"Learning goal orientation",
"Trainer directiveness"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\nReview: What makes a good training course? This question feels relatively unasked given its importance, and there are few more important questions any human resources (HR) department or any other could pose. With an average budget in an organization of $1 million annually, there is a vast amount of resources that go into ensuring that the employees receive the best, most relevant training possible. As such, are we convinced that HR departments have the right information at their fingertips to assess just what makes the optimal training course for their people and organization?\nTrainer and trainee: While going to such lengths may not be practical for an average HR function, that is not to say that it could not find out key information and metrics on how effective their training courses are. Part of this is understanding current pedagogical thinking around this, and partly it is looking into areas such as management psychology to understand how their employees learn, whether it till be applied and shared, and what benefit this will have on the organization.\nJob satisfaction?: To test their theories, the authors posed three research questions which they sought answers for from a survey of business students and a large US university. They were:\nPractical implications: Outputs from this genre of research can be particularly valuable, and in this case, there are a number of current and future implications for HR functions and how they go about the arrangement of their training:\nComment: The article \"Do trainer style and learner orientation predict training outcomes?\" by Harris et al. (2014) offers genuine value to all HR managers about to reassess their training programs. It develops a method of analyzing not just the impact made on the employee but also the suitability of a trainer, which is a unique combination in an academic research article.\n",
"role": "user"
},
{
"content": "Purpose: This paper aims to review the latest management developments across the globe and pinpoint practical implications from cutting-edge research and case studies.\nDesign/methodology/approach: This briefing is prepared by an independent writer who adds their own impartial comments and places the articles in context.\nFindings: What makes a good training course? This question feels relatively unasked given its importance, and there are few more important questions any human resources (HR) department or any other could pose. With an average budget in an organization of $1 million annually, there is a vast amount of resources that go into ensuring that the employees receive the best, most relevant training possible. As such, are we convinced that HR departments have the right information at their fingertips to assess just what makes the optimal training course for their people and organization?\nPractical implications: The paper provides strategic insights and practical thinking that have influenced some of the world's leading organizations.\nOriginality/value: The briefing saves busy executives and researchers hours of reading time by selecting only the very best, most pertinent information and presenting it in a condensed and easy-to-digest format.\n",
"role": "assistant"
}
] |
Organizational identification, work engagement, and job satisfaction
|
[
"Social identity theory",
"Job satisfaction",
"Work engagement",
"Mediation",
"Organizational identification",
"Work behavior"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\n__NO_TITLE__: Organizational identification is an important concept in research into affective and behavioral outcomes among employees (Mael and Ashforth, 1992, 1995; van Dick, 2004), as it builds on he social identify tradition to describe a specific form of social identification. Although a high level of organizational identification can yield desirable results for both organizations and individuals (Boros, 2008), research on its relationship with work-related attitudes, such as job satisfaction, and on the mechanisms governing this relationship is in the very early stages of development. With job satisfaction being viewed by different stakeholders as a core indicator of overall organizational functioning and performance (Gresov et al., 1989), a better conceptual understanding of how an employee's bond with his or her broader organization is linked to job satisfaction, and a more comprehensive view of the determinants of job attitudes is called for. The aim of this paper is therefore to examine the relationship between organizational identification and job satisfaction and to propose an underlying mechanism for this relationship. Specifically, drawing from social identity theory, we develop a model whereby organizational identification is linked to job satisfaction through enhanced engagement with work. Empirical support for this model is offered, and its potential implications for research and practice are discussed.\nSocial identity, organizational identification, and job satisfaction: Social identity theory proposes that individuals' identities that stem from group membership are essential for their self-concept (Tajfel and Turner, 1979), which in turn impacts their affect, cognitions, and behavior. Social identity is defined as \"that part of the individual's self-concept which derives from his [sic] knowledge of his [sic] membership in a social group (or groups) together with the value and emotional significance attached to that membership\" (Tajfel, 1978, p. 63). Social identity theory provides the understanding that individuals can identify with a range of social categories based on, for example, gender, nationality, profession, and/or religion, among others, and that each or a combination of these can be the target of identification. The theory has helped to substantially advance our understanding of a range of phenomena related to social influence and group behavior including work-related activity.\nWork engagement, organizational identification, and job satisfaction: Social identity theory suggests that a strong bond between an employee and their organization reinforces their motivation to exert effort for (and on behalf of) their colleagues and their organization (Ashforth and Mael, 1989; Dutton et al., 1994). In turn, a stronger psychological bond with the organization can also reinforce willingness to perform better and to engage with work itself. Enhanced employee engagement can be both mental and physical, reflecting the attitudinal and behavioral elements of the concept. Bakker et al. (2008) described engagement as \"a positive, fulfilling affective-motivational state of work-related well-being that is characterized by vigor, dedication, and absorption\" (p. 187). Similarly, Kahn (1990) noted that \"in engagement, people employ and express themselves physically, cognitively and emotionally during role performance\" (p. 694). 
Although work engagement has been defined both attitudinally and behaviorally (Macey and Schneider, 2008), and there is still some debate as to whether it is best viewed as an attitude or a behavior (Newman et al., 2010), for the purposes of the present study and in line with the literature we view engagement as an affective-motivational state. Therefore, it can be expected that high identification with the organization will yield engagement with its goals (i.e. the specific work and one's job role). An employee's psychological bond (or lack of it) with their organization can potentially enhance or undermine engagement with their work.\nMethod: Participants and procedure\nResults: Table I presents the means (M), standard deviations (SD), and zero-order Pearson correlations (r) for the study variables and reliability coefficients (Cronbach's α) for all study variables. Consistent with H1, organizational identification (OID in the remainder of this section) was positively correlated with job satisfaction (JS, as above) (r=0.39, p\nDiscussion: Building on social identity theory and on the notion of organizational identification, we argued that employees who see membership of their organization as consistent with their personal values and as part of their self-definition tend to be more engaged and also more satisfied with their work. The analyses confirmed all three hypotheses. The results showed that organizational identification has a strong and positive effect on job satisfaction, in line with van Dick et al.'s (2008) study. Expanding on this, we argued and found that the effect of organizational identification on job satisfaction is transmitted through work engagement, and specifically through vigor and dedication. Employees who have a strong and positive bond with their organization are also highly engaged in their work, energized and dedicated to their work, deriving satisfaction from their job as a consequence. As Macey and Schneider (2008) assert, \"true identity with work reflects an 'authenticity' that results in employees connecting with work and addressing difficult issues (i.e. the engagement behavior) [...] It is from the experience of being psychologically present in the work - that the work is a part of one's identity - that employee development and productivity follow\" (p. 12). Our study substantiates this with empirical evidence.\nConclusions: The experience of being a member of a specific organization and subscribing to its values and goals is linked to employee work-related cognitions, attitudes, and behaviors, and specifically, their sense of being focussed, integrated, and connected (i.e. being engaged in their work). Grounded in social identity theory, this study sought to examine the role of employee engagement as an essential link between identification with an individual's organization and satisfaction with their job. A range of important theoretical and practical implications arise from the findings, offering avenues for intervention, research, and further conceptual development. An employee's psychological bond with their organization can undermine or enhance their engagement in and satisfaction with their work.\n",
"role": "user"
},
{
"content": "Purpose: Organizational identification refers to a person's sense of belonging within the organization in which they work. Despite the importance of organizational identification for work-related attitudes and organizational behavior, little research has directly examined the mechanisms that may link these. The purpose of this paper is to provide an understanding of how organizational identification relates to job satisfaction.\nDesign/methodology/approach: Adopting a social identity perspective, the authors present and test two models that describe work engagement and its constituent dimensions (vigor, dedication, absorption) as mediating the relationship between organizational identification and job satisfaction.\nFindings: Bootstrapped mediation analyses provided support for full mediation whereby there is an indirect (via work engagement) and positive effect of organizational identification on job satisfaction. Analyses also provided support for the mediating effects of the three dimensions of work engagement, vigor, dedication, and absorption, in this relationship.\nPractical implications: Although cross-sectional, this study provides a needed first step toward an understanding of the important role of organizational identification for job satisfaction and the mediating role of work engagement in this relationship.\nOriginality/value: The results provide valuable insights into the effects of organizational identification and address some of the gaps in understanding social identity as the context for work behaviors. Theoretical and practical implications for strengthening employee engagement and enhancing organizational identification are discussed.\n",
"role": "assistant"
}
] |
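The entry above reports bootstrapped mediation analyses, in which work engagement carries an indirect effect of organizational identification on job satisfaction. Below is a minimal sketch of how such an indirect-effect bootstrap can be run; the data are simulated, and the coefficients, sample size and variable names are illustrative assumptions, not values from the study.

```python
# Illustrative percentile-bootstrap test of an indirect (mediated) effect a*b.
# Simulated data only; names mirror the entry's constructs for readability.
import numpy as np

rng = np.random.default_rng(0)

def indirect_effect(x, m, y):
    """a*b: slope of m on x, times the slope of m in a regression of y on x and m."""
    a = np.linalg.lstsq(np.column_stack([np.ones_like(x), x]), m, rcond=None)[0][1]
    b = np.linalg.lstsq(np.column_stack([np.ones_like(x), x, m]), y, rcond=None)[0][2]
    return a * b

n = 300
oid = rng.normal(size=n)                          # organizational identification
eng = 0.5 * oid + rng.normal(size=n)              # work engagement (mediator)
js = 0.6 * eng + 0.05 * oid + rng.normal(size=n)  # job satisfaction

boot = np.empty(2000)
for i in range(boot.size):
    idx = rng.integers(0, n, n)                   # resample cases with replacement
    boot[i] = indirect_effect(oid[idx], eng[idx], js[idx])

lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"95% bootstrap CI for the indirect effect: [{lo:.3f}, {hi:.3f}]")
```

A confidence interval that excludes zero supports the indirect effect; a claim of full mediation additionally requires the direct path from the predictor to become non-significant once the mediator enters the model.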
Differences between younger and older populations in nutrition label reading habits
|
[
"Nutrition",
"Labelling",
"Consumer behaviour",
"Croatia"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\nIntroduction: It is well known that balanced and varied nutrition and physical activity can significantly contribute to the prevention of some chronic diseases linked with obesity and excess weight (World Health Organization, 2006). To prevent such illnesses, nutrition labelling is one of the major instruments for helping consumers to choose nutritionally appropriate food. However, the current European nutrition legislation does not require nutrition labelling as mandatory for all food packages, so findings in the European context may be biased towards consumer views in northern Europe, in particular in the UK (European Commission, 1990; Cowburn and Stockley, 2005). It is estimated that in the UK, around 80 per cent of pre-packaged foods provide nutrition labelling information, compared to 30 per cent in Greece (Cowburn and Stockley, 2005). On the other hand, there have been indications that nutrition labels may not be read at considerable level. Face-to-face interviews conducted among consumers in Germany, Denmark, Spain, Hungary and Poland found that nutrition information is read by only a small percentage of participants (Bureau Europeen des Unions de Consommateurs, 2005). Several systematic reviews have pointed out the problems in developing a European framework for action regarding how to improve nutrition labelling, when so little is known about consumer understanding and use of nutrition labelling in a broader European context, in particular, in southern Europe (Cowburn and Stockley, 2005; European Heart Network, 2003). More research is needed to explore peoples' motives to read nutrition labels, their understanding and suggestions for easier manipulation of that information in daily dietary management, especially, among specific subgroups of the population. Thus, the goals of this study were:* to identify socio-demographic and health variables associated with reading nutrition label information among a representative sample in Croatia; and* to explore the socio-demographic and health characteristics of \"label users\", in particular among older and younger populations, their reasons for reading nutrition labels, the most commonly read elements of the labels, and stimuli for easier understanding of nutrition information during food purchase.\nParticipants and methods: Participants\nResults: Participants\nDiscussion: Label reading\n",
"role": "user"
},
{
"content": "Purpose: - The purpose of this paper is to identify the socio-demographic and health variables of a representative sample of Croatian subjects over 15 years of age associated with reading nutrition labels and, in particular, to examine the association of age characteristics of \"label users\" with nutrition reading habits.\nDesign/methodology/approach: - A total of 1,011 randomly selected subjects over 15 years of age were interviewed by telephone. Moreover, 638 \"label users\" were recruited for assessing nutrition label habits.\nFindings: - Of the subjects, 36 per cent claimed that they had never, 25 per cent rarely, 19 per cent always and 15 per cent sometimes read nutrition labels. Females, participants with the highest levels of education, still undergoing education, physically active and on special diets claimed that they were more likely to read nutrition labels. Among \"label users\", younger participants mentioned \"curiosity\" as the most important reason for reading nutrition labels, while older participants more often pointed out \"wish for healthy eating habits\". \"Interpretational aids\" were often mentioned among younger participants for easier understanding of nutritional information, while older participants requested \"bigger letter size\".\nResearch limitations/implications: - The present study provides a starting point for establishing nutrition education targeted at specific groups of consumers. Research is limited to the subject's self-reported nutrition label reading. Future research is needed to explore the differences between \"label users\" and \"'non-users\".\nOriginality/value: - As the study showed, a notable difference exists between younger and older subjects regarding nutrition habits that could help experts toward effective communication. Nevertheless, the study could fill the research gaps in nutrition reading habits of specific groups of consumers in European countries, other than northern Europe.\n",
"role": "assistant"
}
] |
Age differences in responsiveness to shocking prosocial campaigns
|
[
"Empathy",
"Persuasion",
"Negative emotions",
"Aging process",
"Emotional appeals",
"Helping behaviours"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\n__NO_TITLE__: Faced with a challenging competitive environment and driven by a need for visibility, charities increasingly rely on shockvertizing to promote helping behaviours. These campaigns appeal strongly to emotions by deliberately emphasising the suffering generated by a humanitarian disaster (e.g. campaigns by Action against Hunger, Amnesty International). They use sensational content in an attempt to elicit strong emotional reactions (Bagozzi and Moore, 1994). Used in mass media (especially in the press, outdoor poster displays and TV) and fundraising mailing, shocking campaigns are broadcasted to a wide audience, regardless of the differences in audience profile. However, certain individual variables play an important role in helping behaviours.\n1. Theoretical background and objectives: 1.1 Role of emotional responses in the prosocial persuasion process\n2. Study 1: 2.1 Data collection and preliminary analyses\n3. Study 2: 3.1 Data collection\n4. General discussion: 4.1 Conclusion\n",
"role": "user"
},
{
"content": "Purpose: This paper aims to study age differences in responsiveness to shocking prosocial campaigns. It specifically investigates the differences between seniors and young adults regarding the persuasive effect of emotional charity campaigns, the intensity of negative emotions and empathy elicited, the affect-based persuasive process leading to the ad effectiveness and the role of perceived self-efficacy.\nDesign/methodology/approach: Two studies are conducted on seniors (n1 = 173; n2 = 118) and young adults (n1 = 170; n2 = 123) exposed to appeals for \"Action against Hunger\" incorporating various levels of emotional content.\nFindings: The results indicate that seniors are more receptive to emotional campaigns but only for higher emotional ads and only regarding intention to donate. Young adults report lower levels of empathy, and findings do not suggest major age differences in the reported negative emotions after exposure to low or high emotional appeals. The affect-based persuasion differs; young adults are particularly influenced by negative emotions, whereas the persuasive effect on the elderly stems from an empathetic reaction. Self-efficacy moderates the effect of negative emotions only among older participants.\nPractical implications: Social marketers are advised to use vivid and realistic stimuli to reach young people and develop these appeals in social media. For an older target audience, however, marketers are advised to use creative components and media support designed to foster personalisation and proximity.\nOriginality/value: This study provides a better understanding of age-related effects on emotional persuasion, and addresses the specificity of prosocial appeals involving responsiveness to the suffering of others.\n",
"role": "assistant"
}
] |
Becoming a better leader: how action learning fosters authenticity and mindfulness
|
[
"Mindfulness",
"Action learning",
"Management development",
"Authentic leadership",
"Training programs"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\n__NO_TITLE__: Are great leaders born that way, or can leadership be taught? There is a growing belief that authentic leadership is an essential ingredient in creating sustainable business performance. And authenticity - consistently acting in accordance with a clear personal set of values - is not an obvious subject for successful transfer in a classroom setting. This is a personal attribute that develops with experience - which raises a question: can a long-running management development program expose prospective leaders to a series of experiences over time in such a way that they become more authentic?\nWhen does leadership become authentic?: There is as yet no single agreed definition of authentic leadership - a concept that's widely discussed together with ethical leadership, transformational leadership and positive organizational behaviour. Authenticity involves awareness of values, beliefs and emotions - it prompts greater self-awareness and positive self-development. Earlier empirical studies identify the essential components of authentic leadership as:\nMaking leadership mindful: Another concept currently receiving attention is mindfulness - awareness and attention to immediate experience, or \"living in the moment\". This means experiencing each moment as it occurs, without making any judgement. The aim is to make direct contact with any experience and so be able to respond to it in a more open, flexible way. Mindfulness appears to have quite a lot in common with authentic leadership, although they differ in their interpretation of unbiased processing of the situation. With authenticity, this means evaluating information objectively, without minimizing or exaggerating its implications.\nCan action learning can help to develop authentic leadership?: This well-known approach forms the basis of a three-year development program run by a firm of management consultants in Canada. Participants complete a week of \"soft management skills\" training prior to starting the program and then have five consecutive days of training every four months. Aimed at middle managers, it provides them with the opportunity to engage with real-life management issues in a setting that has many workplace characteristics, but is at the same time remote from where they normally work. Importantly, this relatively low-risk and supportive situation:\nLearning to lead in a low-risk environment: With access to the program over three years, the study brought together twice-yearly self-assessment questionnaires completed by five cohorts of students. This was supplemented by 24 semi-structured interviews with students at different stages of the program to provide a narrative and add meaning to the data analysis.\nComment: This article reviews \"Authentic leadership and mindfulness development through action learning\", by Baron (2016). In a longitudinal study, it examines the responses of middle managers to a Canadian leadership development program based on action learning principles. With the subjects evaluating their authenticity and mindfulness at different stages of the program, results suggest that undertaking action learning is effective in fostering these characteristics. Further research is recommended to identify the aspects of action learning that are most effective in promoting authentic leadership.\n",
"role": "user"
},
{
"content": "Purpose: This paper aims to review the latest management developments across the globe and pinpoint practical implications from cutting-edge research and case studies.\nDesign/methodology/approach: This briefing is prepared by an independent writer who adds their own impartial comments and places the articles in context.\nFindings: Are great leaders born that way, or can leadership be taught? There is a growing belief that authentic leadership is an essential ingredient in creating sustainable business performance. And authenticity - consistently acting in accordance with a clear personal set of values - is not an obvious subject for successful transfer in a classroom setting. This is a personal attribute that develops with experience - which raises a question: can a long-running management development program expose prospective leaders to a series of experiences over time in such a way that they become more authentic?\nPractical implications: The paper provides strategic insights and practical thinking that have influenced some of the world's leading organizations.\nOriginality/value: The briefing saves busy executives and researchers hours of reading time by selecting only the very best, most pertinent information and presenting it in a condensed and easy-to-digest format.\n",
"role": "assistant"
}
] |
Building social capital and learning relationships through knowledge sharing: a social network approach of management students' cases
|
[
"Knowledge sharing",
"Social network analysis",
"Social capital theory"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\n1. Introduction: Knowledge sharing is a key driver for organization success. Through personal and information networks, organizations become more responsive to planning, implementing and correcting courses of actions (Nonaka and Nishiguchi, 2001). From the knowledge-based view, a firm's ability to share, adapt and upgrade organizational knowledge effectively is a key to sustained competitive advantage (Alavi and Leidner, 2001; Grant, 1996). Organizational knowledge sharing involves employees' sharing tangible forms of knowledge such as documents and formulas as well as intangible knowledge such as previous experiences. These acts inherently involve behaviors of helping and collaborating for a common goal.\n2. Conceptual frameworks and research model: To examine how personal relationships and interdependencies influence knowledge sharing, we drew on concepts of social capital (Nahapiet and Ghoshal, 1998), social networks (Kadushin, 2012) and communities of practice (CoP) (Brown and Duguid, 1991; Wenger, 1998). Wang and Noe's (2010) review of the organizational knowledge sharing literature state that among all 76 empirical qualitative and quantitative studies examined in their review, only about one-third of all studies explicitly employed a theoretical framework, and most commonly used theories were the theory of reasoned action (for planned behaviors), social capital, social exchange and network theory. They also noted that about a dozen other theories and views (e.g. expectancy theory, knowledge-based view of firms and social learning) were used to explain knowledge sharing motives and behaviors (for a complete list, see p. 122). We saw a clear connection between theories of social capital, networks and CoP with the relationship and interdependence focused view of knowledge sharing, thus review them in the following. 2.1 Social capital\n3. Review of the literature and research hypotheses: This section reviews how the theory of social capital has been defined, measured and applied in the organizational knowledge literature. Among theories of social capital, social network and CoP, we adopted social capital as the primary and overarching theoretical framework for this study because it incorporates the primary importance of informal people relationships in both social network and CoP and further clarify types of relationship to focus. As part of this review, hypotheses were also drawn. 3.1 Knowledge sharing: exchange between and among individuals\n4. Methods: 4.1 Participants and data collection\n5. Findings: Density, diameter and degree centralization are important indicators for understanding network level characteristics. Density and degree centralization are both normalized, so it makes possible to compare different networks of a similar size in a meaningful manner. Diameter is the longest path in the network. It indicates the boundary of a network based on direct connections. Density is often interpreted as equivalent as the mean in traditional statistics. Density shows how a network's connection is cohesive. Density in a directed network is calculated as observed connections divided by all possible connections. For example, the number of possible connections within the examined knowledge sharing network's is 12,210 [= n*(n-1)]. In the same network, all observed dyadic relationships are 427. 
Density of knowledge sharing in this study's network is 0.034, meaning that 3.4% of all possible connections were actual relations (Table 2). Degree centralization represents the observed network's tendency toward centralization. A higher number indicates a more hierarchical and centralized structure. A higher degree centralization indicates a specific topology, such as a star shape (i.e. having a central figure in the network). A lower degree centralization indicates that all or most nodes in the network have a similar number of connections (Table 2).\n6. Discussion: The network logistic regression analysis of the influence of social capital (i.e. the structural, relational and cognitive dimensions, Nahapiet and Ghoshal, 1998) on knowledge sharing found good efficacy of the proposed research model (total fraction correct of 0.98) and a stronger influence of the structural dimension than of the relational and cognitive dimensions in predicting knowledge sharing ties. In this section, we discuss what such findings imply and how they compare to other studies in the literature.\n7. Conclusion: This study makes a welcome contribution to the knowledge sharing literature by identifying the influence of interdependence among goal-sharing members. As a concept that treats reciprocal relationships as a source of change, and as a rapidly growing and advancing tool that enables researchers to examine system characteristics and interactions, the social network analysis approach can be applied more widely in organizational knowledge research to trace the process and influence of improving various intangible assets.\n",
"role": "user"
},
{
"content": "Purpose: This study has adopted theoretical frameworks of social capital, social networks and the Community of Practice to study how different types of relationships influence the knowledge sharing relationship. This paper aims to suggest that building social capital, particularly structural capital, is part of expected key in knowledge sharing networks.\nDesign/methodology/approach: Using social capital theory, through social network analysis of 111 management students in the US business school, identified key social capital dimension in knowledge sharing networks. To incorporate the interdependency among examined relationships, network logistic regression with the quadric assignment procedure was used.\nFindings: The proposed research model showed that about 98% of the existence of knowledge sharing relationship could be correctly predicted. Among three dimensions of social capital, this study found a superior influence of the structural dimension (i.e. task interdependence) in predicting having a knowledge sharing relationship. The significant effect of trust and friendship network on knowledge sharing was also found. Implications for practice and suggestions for future research were also discussed.\nOriginality/value: Existing literature as to how people learn through knowledge sharing is limited in at least two important ways. First, scholars of knowledge management acknowledge that organizational knowledge originates from dyadic relationships between or among individuals at work. However, prior research has heavily relied on survey responses from one's perception of knowledge sharing experience, viewing as unidirectional. Second, substantial attention of prior research has been devoted to the factors of individual attributes. Emphasizing individual interactions as the fundamental building blocks of learning, this study focuses more on relational characteristics of knowledge sharing based on social capital theory.\n",
"role": "assistant"
}
] |
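The knowledge sharing entry above derives a directed-network density of 0.034 from 427 observed ties among the 111 actors its abstract mentions (12,210 possible ties). A quick arithmetic check of that calculation, using only figures taken from the entry:

```python
# Check of the directed-network density arithmetic reported in the entry above.
n_nodes = 111                              # actors in the examined network
observed_ties = 427                        # observed dyadic knowledge sharing ties

possible_ties = n_nodes * (n_nodes - 1)    # directed ties, no self-loops
density = observed_ties / possible_ties

print(possible_ties)                       # 12210, matching the entry
print(round(density, 3))                   # 0.035; the entry reports 0.034,
                                           # presumably a truncated figure
```

The n*(n-1) denominator counts each ordered pair once, which is why a directed network of the same size has twice as many possible ties as an undirected one.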
A mixed methods empirical exploration of UK consumer perceptions of trust, risk and usefulness of mobile payments
|
[
"Perceived risk",
"Perceived usefulness",
"Perceived trust",
"Mobile phone",
"Contactless payments",
"Mobile payments"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\n1. Introduction: Cash is an established payment method, but the adoption of new electronic payments such as mobile payments arises when consumers identify a perceived relative advantage (Riquelme and Rios, 2010). A mobile payment is a financial exchange between two parties using wireless technology that is supported by the widespread adoption of consumer-based technology (Ling, 2004) together with self-service technology adoption (Bolton and Saxena-Iyer, 2009). However, the existing academic literature includes mobile payments within mobile commerce and mobile banking (Jacob, 2007), although the European Payments Council (2012) mobile payment definition is broad enough in scope to cover the latest payment instrument developments that include contactless payments through cards and other consumer devices.\n2. Theoretical background: Mobile payments require a mobile device to \"initiate, authorize and/or confirm an exchange of financial value\" that can replace payments made with cash, cheque or payment cards (Zhong, 2009, p. 80). Furthermore, mobile payments do not restrict themselves to payments via a mobile phone (Karnouskos and Vilmos, 2004) as a mobile payment is based upon a portable device that has the relevant technology with wireless capability to transfer money electronically between two parties (Bourreau and Verdier, 2010; Turowski and Pousttchi, 2004). As a result, this includes Europay, MasterCard and VISA (EMV) contactless smart cards, although similar terms like proximity or remote payments are also used. Mobile payments entail a complex environment (Rochet and Tirole, 2002) and is a relatively new and continually evolving payment method in the UK (MasterCard, 2012; VocaLink, 2013) with various mobile payment organisations currently establishing different UK consumer services (Apple, 2014; Samsung, 2014).\n3. Research model and constructs: The technology acceptance model (TAM) assesses the core human psychological aspects of technology adoption (Davis, 1989) with a focus early in the outcome chain (Igbaria et al., 1994) and as a result, is a more effective model for exploring UK consumer interest in mobile payments when compared to other models (Mathieson, 1991). The TAM has been widely used in empirical studies of mobile banking (Maroofi et al., 2013) and mobile payments (Swilley, 2010), although no UK consumer studies have been identified. However, the two cognitive response constructs of perceived ease-of-use and perceived usefulness are not sufficient in their own right to explain an individual's acceptance of technology (Mathieson, 1991), although perceived usefulness is a vital element in encouraging consumers to change their habits (Ho and Ko, 2008). The inclusion of trust and risk constructs to the TAM is believed to be crucial to the successful adoption of mobile payments by UK consumers, although according to Koenig-Lewis et al. (2010), these constructs are not yet well explored in the mobile banking literature.\n4. Research methodology: An exploratory sequential mixed methods research design is used with a questionnaire as the first research instrument followed by semi-structured interviews which are complementary methods that support each other (Field, 2013). 
The use of mixed methods research as part of a post-positivist philosophy and a social constructionist ontology allows sense to be made of the subjective and socially constructed meanings of mobile payments data obtained from participants (Denzin and Lincoln, 2011).\n5. Analysis and findings: A range of participant data is obtained and analysed as shown in Table I. Whilst questionnaire data are obtained from 120 participants, 15 of these are non-UK participants and are excluded, which results in a total of 101 usable questionnaire responses that include 53 submitted electronically through LinkedIn and Facebook, whilst the remaining 48 are obtained in a face-to-face environment. In all, 78 per cent of the 101 questionnaire respondents are aged from 35 to 64 years old, of which 31.7 per cent are in the 45-54 years old age range compared to 17.5 per cent of the UK adult population. In addition, 3 per cent of the questionnaire respondents are aged from 16 to 24 years old compared to 14.5 per cent in the UK adult population. Furthermore, 69.4 per cent of the questionnaire respondents hold one or more degree qualifications, whilst 44.6 per cent also hold a post-graduate degree, with minor variations in the findings identified following analysis of the nominal data.\n6. Discussion and implications: Despite the widespread adoption of consumer-based technology (IDC, 2014; Ling, 2004), UK consumers identify a number of risks with mobile payments, although trust can offset these risks, which positively influences perceived usefulness; this is consistent with findings on Iranian consumers and mobile banking (Mohammadi, 2015). Perceived usefulness is a key influence on consumer adoption of various self-service technologies (Kaushik and Rahman, 2015; Mortimer et al., 2015; Yadav et al., 2015). Mobile payment interest is negatively influenced by UK consumer perceptions of technology and security risks including the lack of PIN authentication, although the benefits and advantages can outweigh the risks for UK consumers in some very specific scenarios, e.g. queue avoidance in high volume situations including Transport for London and the Mersey tunnel toll. Identifying and meeting a specific UK consumer need for mobile payments increases perceived usefulness, and this encourages UK consumers to change their choice of payment method (Sathye, 1999) and overcomes barriers to adoption (Meuter et al., 2000). As a result, perceived trust positively influences perceived usefulness, and mobile payment organisations will need to focus on identifying the specific benefits of mobile payments to UK consumers, as mitigating risk and increasing trust does not compensate for the absence of usefulness (Eriksson et al., 2005).\n",
"role": "user"
},
{
"content": "Purpose: The purpose of this paper is to explore UK consumer perceptions of trust, risk and perceived usefulness of mobile payments through the use of sequential mixed methods.\nDesign/methodology/approach: A post-positivist philosophy is used with a social constructionist ontology with a questionnaire as the first survey instrument using an empirical sequential mixed methods approach. Summary quantitative analysis of the questionnaire data is undertaken followed by semi-structured interviews that produce qualitative data on which content analysis is undertaken to assess and explore UK consumer perspectives. The technology acceptance model is used as the underlying framework on which a conceptual model is developed.\nFindings: UK consumers have significant risk and trust concerns with mobile payments, although these concerns can be overcome when clear consumer benefits are identified whilst bank supported mobile payments have an increased level of trust compared to new market entrants and other established companies. Furthermore, perceived trust positively influences perceived usefulness and mitigates perceived risk, whilst perceived risk negatively influences perceived usefulness. In addition, perceived usefulness significantly and positively influences UK consumer attitude which can lead to adoption.\nResearch limitations/implications: Whilst 120 completed questionnaire responses are received, only 101 questionnaires are used for analysis. In addition, ten semi-structured interviews are undertaken using a purposeful sample to minimise any imbalance (Oakley, 1981) which increases the reliability of the research findings (Hackley, 2003). This mobile payments research does not have a statistically secure universalisation of the findings, which negates the application of these research findings to other groups and to different social settings (Lincoln and Guba, 1985).\nPractical implications: Mobile payment organisations will need to focus on identifying the specific benefits of mobile payments to UK consumers as mitigating risk and increasing trust do not compensate for the absence of usefulness.\nSocial implications: UK consumers indicate a lack of awareness of existing contactless payment guarantees provided by UK banks, although these payment guarantees significantly increase UK consumer trust.\nOriginality/value: Both quantitative and qualitative empirical data are obtained on UK consumer perspectives of risk, trust and perceived usefulness of mobile payments using sequential mixed methods.\n",
"role": "assistant"
}
] |
The future of poverty and development in Africa
|
[
"Africa",
"Economic development",
"Epistemology",
"Emerging markets"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\n1. Introduction: When development studies emerged 60 years ago as a distinct field of study concerned with the structure and behavior of poor countries, the underlying framework was relatively simple for the economist. In the immediate aftermath of the Second World War, there was a measure of agreement that economic development meant the process whereby an economy's real national income increases over a long period of time. At the beginning of the so-called \"Development Decade\", in the early 1960s, United Nations Secretary General U. Thant described development as growth plus structural change (United Nations (UN), 1962). Since then, the notion of development has broadened substantially, even as full understanding of what needs to be done has remained rather elusive.In September 2000, the Millennium Summit of the United Nations adopted the Millennium Declaration as an expression of commitment to global development and poverty eradication. Subsequently, the Millennium Development Goals (MDGs) were formulated as a quantifiable set of targets to fulfill the desires expressed in the Declaration. The MDGs are the culmination of 50 years of thinking and policy about development. They also represent a shift from an original empirical grounding to a normative framing of the development question.This paper deals with how thinking and practice on African development might evolve over the next 25 years and how the \"development industry\" in Africa might change correspondingly. The paper's premise is that the epistemological shift embodied in the MDGs is the outcome of the interplay between ideas and practice. Accordingly, it explores the significance of this continuing interplay for Africa over the next 20-30 years.The next section provides a chronological sketch of the evolution of development thinking, ending with some thoughts on the epistemological context of the MDGs. Engaging the theme of this special issue of foresight, Section 3 considers the prospects for an alternative African development narrative, and outlines a possible scenario developed via a prosperity optic rather than one of poverty alleviation. Section 4 concludes with a summary of the main messages.\n2. The evolution of development thinking: The first 40 years\n3. Towards an African development narrative: The world is on the cusp of a new paradigm in development thinking and practice. The new paradigm will be the outcome of the contradictions that have emerged in implementing the MDG agenda against the backdrop of the dynamics of globalization. Already the new development narrative is taking shape and bringing back strong overtones of Gerschenkron's perspective on the industrialization of late starters. The new narrative will reflect the positive and negative aspects of the current development consensus as well as the ongoing realignment of global social forces.Beyond the MDGs\n4. Conclusion: This paper has examined how the conceptualization of development has evolved in the last 60 years, and how the \"development industry\" in Africa might change over the next 25 years horizon, given global trends and emerging realities. It explored the interplay of ideas and practice, identified key global drivers and considered their significance for Africa over the over the next generation. 
It asked what, beyond the MDGs, should be the Africa development narrative for the next few decades. Adopting a historical approach, the paper chronicled the changes in the epistemological foundations of development thinking over 60 years of development theory and practice. It also explored how concurrent changes in the international context for development have influenced both the thinking on and management of development. It was shown that, from an original grounding in historical realities, the contemporary context of development is based on aspirations. The ends of development used to derive from the experiences of successful developers, and success was envisaged in terms of the attributes that actually accompanied historically successful development stories. The current paradigm, codified in the MDGs, apprehends development through the prism of assumed universal norms, and aims for the time-bound attainment of some minimum standards. Broadly, the post-MDG development narrative will remain on the same path because it is coming out of the same process. Addressing the challenge of African prosperity over the next 20-30 years requires an African development narrative generated endogenously. A scenario analysis was undertaken to identify the possibility of Africa as \"the land of the future\". The core messages include Africa rejecting its portrayal as \"victim\" in the international system and replacing the poverty ideology with one of prosperity as an attainable goal. It must also reject the condescension implicit in regarding Africa as a \"special case\" that requires continuing international intervention. The message must exhort Africa to become globally competitive: it must innovate or die. Successful African development will depend on the energy of all Africans and the ability of its leadership to generate extraordinary performance from an apparently ordinary combination of circumstances. The leadership that will pursue this message must, among other attributes, gain the trust and respect of the people, be willing to sacrifice and be able to create a vision that resonates with citizens. The leadership will be transparent in actions and accountable to the public. It will ensure access to basic social services and be aggressively committed to raising living standards.\n",
"role": "user"
},
{
"content": "Purpose: - This paper aims to examine how the conceptualization of development has evolved and how, given emerging global economic trends, this might affect the development industry in Africa. It explores the interplay of ideas and practice, identifies key global drivers and considers their significance for Africa over the next generation.\nDesign/methodology/approach: - Adopting a historical approach, the paper chronicles the changes in the epistemological foundations of development thinking over 60 years of development theory and practice. It also explores how concurrent changes in the international context for development have influenced both the thinking on and management of development. The paper undertakes a scenario analysis in search of an African development narrative that is more appropriate to the challenge of African prosperity over the next 20-30 years.\nFindings: - It is shown that the contemporary view of development represents an epistemological shift from a perspective defined by the actual experiences of successful developers, to one defined through the prism of some assumed universal norms. Focusing on a particular scenario of Africa as the \"land of the future\", the paper suggests that Africa should reject its portrayal as \"victim\" in the international community, replace the poverty ideology with one of prosperity, and reject the condescension implicit in regarding Africa as a \"special case\" that requires continuing intervention. It stresses that the Africa of the future must be globally competitive.\nPractical implications: - To address the challenge of African prosperity over the next 20-30 years, an African development narrative generated endogenously and sustained by the energy of the continent's people, is required. Such a narrative requires that African leaders take full responsibility for Africa's destiny and actively develop a uniquely African story and embrace the African development project.\nOriginality/value: - The study provides a historically based alternative perspective to the MDGs as a framework for considering the future of Africa over the next several decades.\n",
"role": "assistant"
}
] |
Assessing the effects of multichannel service provider corporate reputation on customer new product adoption and RFM value
|
[
"Mediation",
"Moderation",
"Customer lifetime value",
"Customer product adoption proneness",
"Customer-based corporate reputation",
"Multichannel services",
"RFM value"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\nIntroduction: Companies increasingly realise the benefits of a favourable corporate reputation (Gaines-Ross, 2010; Swoboda et al., 2016), especially among customers. Customer-based corporate reputation (CBR), as applied to service firms, refers to customers' overall evaluation of the firm, according to their reactions to the firm's offerings and communications, as well as their interactions with it (Walsh and Beatty, 2007). Previous research reveals that CBR affects a vast range of outcomes: customer trust (Swoboda et al., 2017), loyalty, satisfaction (Jin et al., 2008; Swoboda et al., 2013, 2017), word of mouth (Park and Lee, 2009), online product returns (Walsh et al., 2016), commitment, and share of wallet (Ali et al., 2015; Walsh, Bartikowski and Beatty, 2014). Scholars also point to the indirect outcomes of a favourable reputation, such as facilitating the adoption of new products (Corkindale and Belder, 2009) or driving customer value (Walsh, Beatty and Shiu, 2009). However, customers' new product adoption proneness (CPA) or recency, frequency, monetary (RFM) value have not been investigated empirically as outcomes of reputation efforts. Prior research also tends to consider CBR in single service channels, such as conventional (Purohit and Srivastava, 2001; Swoboda et al., 2013) or online (Caruana and Ewing, 2010; Walsh et al., 2016) retailing. But many retailers sell products through multiple channels, such that their offline and online reputations likely have mutual influences and combine to form the retailers' overall reputations. A dearth of research examines the effect of CBR on relevant outcomes for multichannel service providers such as retailers; rather, insights into the mechanisms by which CBR affect relevant customer outcomes (especially monetary ones) are still nascent, without addressing contextual factors (Walsh, Bartikowski and Beatty, 2014).\nBackground and hypotheses: Firms develop and deploy market-based assets, such as favourable reputations, to increase customer value and firm performance (Sarstedt et al., 2013). A customer-based reputation is a relational market-based asset that results, at least in part, from mutually beneficial customer-firm relationships. Because reputational perceptions are relatively stable over time (Bromley, 2000), they represent an appropriate antecedent of behavioural outcomes. From a relational perspective, the proposed conceptual model (see Figure 2, presented subsequently) elaborates CPA and RFM value as two important outcomes of CBR, both of which likely drive firm performance (Kyrgidou and Spyropoulou, 2013; Walsh, Mitchell, Jackson, and Beatty, 2009). Henard and Dacin (2010) investigated the effect of a reputation for product innovation on customer outcomes and find influences on customers' excitement and loyalty towards the innovative firm. Walsh, Bartikowski and Beatty (2014) showed that corporate reputation affects share of wallet, as an important downstream monetary variable. However, CBR's association with RFM value has neither been examined before, nor has empirical research considered the existence of a price reputation dimension. Several approaches to corporate reputation capture customer perceptions, either exclusively or together with perceptions of other stakeholder groups (cf. 
Sarstedt et al., 2013), including Schwaiger's (2004) and Helm's (2005) formative reputation measures, Cravens et al.'s (2003) reputation index, and Fombrun et al.'s (2000) reputation quotient. Walsh and Beatty's (2007) CBR scale, though, was developed expressly for service contexts, so it constitutes the focus of this study and of the theoretical rationales for extending (reconceptualising) it.\nMethod: Measures\nResults: Instrument validation\nDiscussion: As service providers and retailers increasingly encourage customers to shop in multiple channels, a growing management concern pertains to the best way to allocate valuable, rent-earning resources across different channels. A retailer's reputation is one such resource. An increasing interest in the antecedents and consequences of corporate reputation has prompted a gradually evolving knowledge base related to the effects of a good reputation on important downstream variables. Although the service and broader management literatures have made major strides in clarifying the outcomes of CBR, a conspicuous lack of empirical support exists for the notion that CBR affects customer adoption behaviour and customer value. This gap is surprising, especially considering that adoption behaviour drives firm success (Bartels and Reinders, 2011), and many retailers and other service providers use customer value to evaluate their corporate strategy, spending (Bolton, 2004), and customer targeting campaigns (Kumar, 2010; Petersen et al., 2009). With a focus on CPA and RFM value as critical but understudied outcomes of CBR, this study examines the contingent links from CBR, in relation to multichannel retailers. Drawing on extant management research (Walsh, Bartikowski and Beatty, 2014) and signalling theory (Spence, 2002), it affirms the links of retailer reputation with customer perceived risk and perceived value. Furthermore, it reveals significant connections between perceived risk and perceived value, between perceived value and CPA and RFM value, and between CPA and RFM value. At least one moderating effect of channel type also emerges: the positive link of perceived value with CPA and RFM value is stronger for the online (separated) than the offline channel. However, the other links do not differ across channels, and together, these findings suggest some notable theoretical and managerial implications.\n",
"role": "user"
},
{
"content": "Purpose: Service providers leverage their corporate reputation management efforts to increase revenues by shaping customer attitudes and behaviours, yet the effects on customer innovation adoption and customer value remain unclear. In an extended conceptualisation of customer-based corporate reputation (CBR), the purpose of this paper is to propose that customer perceived risk, perceived value, and service separation are contingencies of the relationship between CBR and two key customer outcomes: customer new product adoption proneness (CPA) and recency-frequency-monetary (RFM) value.\nDesign/methodology/approach: Using a predictive survey approach, 1,001 service customers assess the online or offline operations of six multichannel retailers. The hypothesised model is tested using structural equation modelling and multigroup analysis.\nFindings: The analysis reveals significant linkages of CBR with perceived risk and perceived value, as well as between perceived risk and perceived value and from perceived value to CPA and RFM value. These linkages vary in strength across unseparated (offline) and separated (online) services.\nResearch limitations/implications: This study uses cross-sectional data to contribute to literature that relates CBR to relevant customer outcomes by considering CPA and RFM value and investigating contingent factors. It provides conceptual and empirical evidence that price appropriateness represents a new CBR dimension.\nPractical implications: The results reveal that CBR reduces customers' perceived risk and positively affects their perceived value, which drives CPA and RFM value. Multichannel retailers can create rewarding customer relationships by building and nurturing good reputations.\nOriginality/value: This study is the first to link CBR with customer product adoption proneness and value, two important customer measures. It proposes and tests an extended conceptualisation of CBR.\n",
"role": "assistant"
}
] |
Social presence and customer brand engagement on Facebook brand pages
|
[
"Social presence",
"Thailand",
"Social media",
"Customer brand engagement",
"Facebook brand page",
"Lurkers",
"Smart IT devices"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\nIntroduction: Companies have long sought to foster brand loyalty by engaging consumers in brand communities (Muniz and O'Guinn, 2001; McAlexander et al., 2002). The concept is not new, but the tools companies can use to build brand communities are shifting drastically for consumer segments that spend much of their time online. Social media, in particular, is becoming ubiquitous. Social media has the ability to foster brand community, and companies have rapidly added social media to their marketing and brand-building activities (Hollebeek et al., 2014; Dessart et al., 2015). By early 2016, 97 per cent of Fortune 500 companies used LinkedIn, 84 per cent used a Facebook brand page and 84 per cent used Twitter (Barnes and Griswold, 2016). One of the most common aims is to create brand communities (Gummerus et al., 2012), and \"social media brand community\" or \"fan page brand community\" have become commonly used terms (Goh et al., 2013; Laroche et al., 2013; Habibi et al., 2016).\nLiterature review: Brand perceptions\nMethodology: This research uses qualitative in-depth interviews. Small-sample qualitative research clearly cannot \"prove\" the model implied by the propositions in the way that large-sample quantitative data can statistically test formal hypotheses. One can nevertheless assess the feasibility of the propositions by demonstrating that responses are consistent with what the propositions say. While this is not the most common use of qualitative phenomenological research (i.e. research into understanding people's perceptions and thinking), it is well within the range of issues that qualitative methods can handle well. Qualitative methods sometimes are even applied to deductive work (Eriksson and Kovalainen, 2008).\nAnalysis and results: The following sections discuss the results from the in-depth interviews. The constructs have been discussed in the literature review, and the analysis from both the managers and the customers' point of view aimed to assess how well thinking in this context fit that conceptual framework. In general, managers and customers do not differ very much on how they view things, except that the managers do not seem very aware of the concept of \"lurking\" in an online environment and do not consider simple information consumption on the Facebook brand pages to constitute \"engagement\" in the brand community, while customers do consider this to be a form of engagement.\nConclusion: As social media technology increases, marketers aim to engage customers in online brand communities to enhance the customer brand experience. One key to doing this well is knowing how to manage the customer relationship through social media engagement (Gummerus et al., 2012; Goh et al., 2013). This is, however, fairly new territory for most marketing managers. The CBE concept itself is still being developed and needs deeper conceptual exploration (Hollebeek 2011); Barger et al. (2016) even claim that there is still a substantial lack of consensus about exactly what CBE is. Using social media to manage CBE effectively is not yet very well understood either (Hanna et al., 2011; Tsimonis and Dimitriadis, 2014). In exploring this concept, this research contributes toward a better understanding of how users interact with the corporate fan page, which will help reduce these deficiencies.\n",
"role": "user"
},
{
"content": "Purpose: Business use of social media is increasing rapidly as marketers aim to increase customer brand engagement (CBE) in brand communities to enhance the brand experience. However, use of social media for marketing communications is not well understood. This study aims to examine manager and user views of CBE and effects of perceived social presence (PSP) on CBE in a corporate Facebook environment.\nDesign/methodology/approach: Qualitative in-depth interviews are conducted with 18 users of corporate Facebook - i.e. those who visit corporate Facebook regularly - and four marketing managers whose companies operate Facebook pages. The industry is smart-information technology devices, and the cultural context is Thailand.\nFindings: Corporate Facebook sites with high SP functions foster customer engagement on cognitive, emotional and behavioral levels. PSP enhances product knowledge and encourages return page visits. Thus, CBE and PSP build brand trust and loyalty. However, managers may need to focus more on the nature of the brand community beyond the corporate Facebook page.\nPractical implications: Companies that enrich their corporate Facebook with SP features can encourage CBE. Consumers feel more informed about the brand and therefore feel more positively about it, which enhances the brand experience and brand trust. This holds even for lurkers who do not behaviorally engage on the Facebook page much, but who are active in the broader brand community.\nOriginality/value: This paper studies the relationship between PSP and CBE in a corporate Facebook environment, providing insights into how PSP influences CBE, which can enhance the customer's brand experience and contribute to brand perceptions.\n",
"role": "assistant"
}
] |
On the relationship between purchasing integration and purchasing decision-making speed
|
[
"Structural equation modeling",
"Purchasing decision-making speed",
"Purchasing integration"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\nIntroduction: Turbulent markets and increased global competition call for fast decisions (Kownatzki et al., 2013). Fast decisions are important in purchasing because of the increasing volatility of prices and demand and the increasing number of supply chain disruptions due to, for example, natural catastrophes (Brandon-Jones et al., 2014; Christopher and Holweg, 2011). Purchasing decision-making speed can affect overall firm success because the purchasing function significantly affects overall costs, input quality and innovativeness, and the flexibility and risk profile of a firm (Cousins and Spekman, 2003; Johnson et al., 1998; Kaufmann et al., 2012). Decision-making speed refers to how quickly organizations can execute all aspects of the decision-making process, from the time it takes to identify a problem to the point at which a final decision is made (Forbes, 2005).\nTheoretical grounding and hypotheses development: Information-processing view\nResearch method: The survey\nAnalysis and results: We applied structural equation modeling to test our research model. In particular, we used AMOS 21. The goodness-of-fit indices for our structural model (kh2/df=1.573; CFI=0.934; TLI=0.920; RMSEA=0.062; SRMR=0.066) imply good fit (Hu and Bentler, 1999). Figure 2 shows the structural equation model results, including path coefficients and significance levels for our hypothesized relationships.\nDiscussion and implications: Our findings have implications for the literature on strategic decision making and integration, which we discuss in turn. First, our study contributes to the strategic decision-making literature by examining the decision process dimension decision-making speed for the first time in a purchasing context. Decision-making speed is essential in today's volatile business world of highly vulnerable global supply chains (Christopher, 2000). Previous research has demonstrated that to enhance their performance in unstable environments, firms must make decisions quickly (Baum and Wally, 2003; Eisenhardt, 1989; Souitaris and Maestro, 2010). This study specifically extends knowledge on the role of decision-making speed in a purchasing context by showing that strategic and lateral purchasing integration are two antecedents that positively influence decision-making speed. This study further provides evidence that in environments in which the speed of decision making matters, firms should create structural mechanisms and lateral relationships to enhance their information-processing capabilities (Galbraith, 1974; Peng et al., 2013). Strategic and lateral purchasing integration are thus organizational design choices that improve information-processing capabilities, which in turn help reduce uncertainty through the allocation of more information (Egelhoff, 1991; Trautmann et al., 2009). Our study complements work on decision-making speed in different environmental contexts, such as hospitals, biotechnology, textiles, and microcomputer firms (Eisenhardt, 1989; Judge and Miller, 1991), by examining determinants of decision-making speed on a business functional level across industries. This represents an important extension, as decision processes within a business function can differ from firm decision processes (Kownatzki et al., 2013) and findings across industries have not been consistent (Baum and Wally, 2003; Judge and Miller, 1991).\nLimitations and conclusion: Our contributions to research are twofold. 
First, this study expands knowledge of the strategic decision process dimension decision-making speed. It is a first attempt to empirically test the relationship between decision-making speed and its antecedents in a purchasing context, showing that intra-organizational purchasing integration positively influences decision-making speed. Second, our research takes an additional step toward clarifying the relationship between strategic and lateral purchasing integration. In doing so, this research moves the discussion on intra-organizational integration to a more detailed level. Our study contends that strategic purchasing and lateral purchasing integration are two types of intra-organizational integration that should be analyzed jointly. Lateral purchasing integration emerges as a central element of the intra-organizational integration process because it transmits the positive impact of strategic purchasing integration on purchasing decision-making speed.\n",
"role": "user"
},
{
"content": "Purpose: First, this study expands knowledge on the strategic decision process dimension decision-making speed by analyzing decision-making speed and two possible antecedents in a purchasing context. Second, it takes an additional step toward clarifying the relationship between strategic and lateral integration. Specifically, the purpose of this paper is to analyze the potential mediating effect of lateral purchasing integration on the relationship between strategic purchasing integration and purchasing decision-making speed.\nDesign/methodology/approach: This research analyzes survey data of 152 firms from Austria, Germany, and Switzerland using covariance-based structural equation modeling.\nFindings: The results of the structural equation model provide strong support for the hypothesized relationships. Strategic purchasing integration drives lateral purchasing integration, which in turn positively influences purchasing decision-making speed.\nResearch limitations/implications: This study focusses solely on internal types of integration. A logical next step would be to further enrich the model by including external dimensions, such as supplier or customer integration.\nPractical implications: This study should help managers gain a better understanding of the relationship between strategic and lateral purchasing integration, highlighting their positive impact on decision-making speed. Decision-making speed is particularly important for companies operating in volatile markets and time-constrained business environments.\nOriginality/value: This study offers new insights into the theoretical and empirical connection between intra-organizational purchasing integration, unpacked as strategic purchasing integration and lateral purchasing integration, and purchasing decision-making speed. Furthermore, it offers insights into decision-making speed in a purchasing context.\n",
"role": "assistant"
}
] |
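The fit indices reported in the row above (χ2/df, CFI, TLI, RMSEA) have standard closed-form definitions. The following is a minimal Python sketch of those formulas, not the authors' AMOS workflow; the baseline chi-square, both degrees-of-freedom values, and the example inputs are assumed for illustration only.

```python
import math

def fit_indices(chi2_t, df_t, chi2_b, df_b, n):
    """SEM fit indices from a target (t) and a baseline/independence (b) model."""
    d_t = max(chi2_t - df_t, 0.0)            # excess misfit of the target model
    d_b = max(chi2_b - df_b, 0.0)            # excess misfit of the baseline model
    cfi = 1.0 - d_t / max(d_b, d_t, 1e-12)   # Comparative Fit Index
    # Tucker-Lewis Index compares chi2/df ratios, penalizing model complexity
    tli = ((chi2_b / df_b) - (chi2_t / df_t)) / ((chi2_b / df_b) - 1.0)
    # RMSEA; some programs use n rather than n - 1 in the denominator
    rmsea = math.sqrt(d_t / (df_t * (n - 1)))
    return cfi, tli, rmsea

# Hypothetical inputs: chi2/df = 1.573 as reported, with an assumed df = 200,
# an assumed baseline chi2 = 3500 on df = 230, and the study's n = 152.
print(fit_indices(chi2_t=1.573 * 200, df_t=200, chi2_b=3500.0, df_b=230, n=152))
```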
B2B brand equity: investigating the effects of human capital and relational trust
|
[
"Human capital",
"Brand equity",
"Relational trust",
"B2B",
"Professional services"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\n1. Introduction: Learning about factors affecting brands and outcomes of branding strategies in a business-to-business (B2B) setting has become imperative as industrial markets are becoming increasingly commoditized (Beverland et al., 2007; Li, 2011; Marquardt, 2013). The problem of commoditization is also challenging for companies offering services to clients in B2B markets (Turner, 2009; Ross, 2016). Despite the fact that this problem has been known to providers of B2B services since the 1980s (Knechel, 2007), it has not been until the past two decades that researchers have started to examine the specific challenges associated with building strong service brands and enhancing their brand equity (Biedenbach et al., 2011; Marquardt, 2013; Kumar et al., 2015; Nyadzayo et al., 2016). Previous studies on branding in industrial markets also point to the need to investigate further how the service providers' attitudes and behaviors influence B2B brand equity (Davis et al., 2008; Baumgarth and Binckebanck, 2011; Biedenbach et al., 2011).\n2. Theoretical framework: 2.1 B2B brand equity\n3. Methodology: Our examination of how clients' perceptions of a service provider's human capital and the relational trust affect the development of brand equity is based on data from organizations that purchase professional services in Sweden. The contact information of potential respondents was acquired through the client database of one of the Big Four auditing firms. The study was conducted among randomly selected clients of this service provider. The sample included CEOs and CFOs, who were in charge of purchasing auditing, accounting and consultancy services. Before collecting the data, the survey was pre-tested on an expert group of marketing and management executives from the auditing firm. Trained interviewers then contacted the respondents and completed the surveys over the phone. The total sample included 3,030 organizations. The interviewers were unable to reach 56 per cent of these organizations due to wrong numbers or unanswered calls. Overall, the data collection resulted in 632 valid surveys and a response rate of 21 per cent.\n4. Analysis and results: The analysis was conducted by using SPSS 23 and AMOS 24. Before testing the hypothesized effects, we examined the presence of a common method bias. The common method bias was assessed by conducting Harman's single-factor test (Podsakoff et al., 2003). The exploratory factor analysis based on the principal components method did not result in a single predominant factor. By restricting the extracted number of factors to one, the one-factor solution extracted a total of 44.77 per cent of the variance. The rotated five-factor solution resulted in a cumulative variance of 67.66 per cent. The identified five components were matching the analyzed variables. In addition, the common method bias was tested by using the common latent factor method in structural equation modeling (Podsakoff et al., 2003). By including the common latent factor in our measurement model, we calculated the common variance to be equal to 20.25 per cent. Based on these results, we can conclude that there was no common method bias present in this study.\n5. Discussion: The findings reported in this study point to the important role that human capital plays for enhancing brand equity and building a strong B2B brand. 
These results advance previous studies, which indicate the critical impact that employees providing services in industrial markets have on clients' perceptions about their respective companies (van Riel et al., 2005; Baumgarth and Binckebanck, 2011; Biedenbach et al., 2011; Marquardt, 2013). As hypothesized, we found positive effects of human capital on brand associations, perceived quality and brand loyalty, which represent the core dimensions of B2B brand equity (see Table III). These findings demonstrate that whereas the professional services offered by a service provider in a B2B context include many standardized components, the employees' interactions with the clients may include unique tasks that require special knowledge, skills and creative problem-solving capabilities. The results confirm theoretical propositions about the outstanding impact that an individual employee has on not only enabling a service provision but also shaping current and potential clients' perceptions about the entire company (Berry, 2000; Youndt et al., 2004; Vomberg et al., 2015). The support for H3a thus illustrates the close connection between the clients' perceptions about service provider's knowledge, skills and abilities as well as the clients' brand associations. The support for H4 is further evidence of the role that human capital plays in creating and maintaining trust in the relationships between the clients and providers of professional services. Developing previous insights about relational trust (Selnes and Sallis, 2003), our findings confirm that the relational trust in industrial markets is based on the clients' general beliefs in the service provider's ability to fulfill contractual agreements and to assist the client should problems arise. The relational trust is, thus, founded on the clients' previous experiences from earlier interactions with the service provider, which shape their expectations about future experiences. With regard to such expectations, the support for H3b and H3c illustrates how human capital affects the clients' perceptions of service quality and their loyalty intentions. The support for H2 demonstrates that brand loyalty is determined by not only clients' judgmental views of human capital but also their general evaluations of perceived quality. Therefore, clients' brand loyalty can be seen as being founded on a combination of having received services of high quality in the past and having positive expectations about the service provider's ability to deliver such services in the future.\n6. Conclusions: The purpose of this study was to investigate the effects of human capital and relational trust on B2B brand equity. By examining the clients' perceptions of human capital and relational trust, our research has answered the calls for studies on the determinants of B2B brand equity (Leek and Christodoulides, 2011; Seyedghorban et al., 2016). Specifically, we have explored how hierarchical effects between the B2B brand equity dimensions are affected by factors of central importance in the relationships between service providers and their clients. In this study, human capital, which measures the clients' perceptions of service provider's knowledge, skills and abilities (Youndt et al., 2004), was found to have a major influence on brand associations, perceived quality and brand loyalty. Leading to the creation of relational trust and having a strong impact on the dimensions of B2B brand equity, human capital can be seen as an organizational asset that needs careful management. 
Acknowledged in previous research (van Riel et al., 2005; Baumgarth and Binckebanck, 2011; Biedenbach et al., 2011; Marquardt, 2013) and demonstrated in this study, the clients' perceptions about the employees of companies providing services in industrial markets are critical for enhancing the brand equity of B2B brands.\n",
"role": "user"
},
{
"content": "Purpose: The purpose of this study is to investigate the effects of human capital and relational trust on business-to-business (B2B) brand equity.\nDesign/methodology/approach: Data collection was conducted among the clients of one of the Big Four auditing firms in Sweden. Structural equation modeling was used to test the hypothesized effects.\nFindings: The results demonstrate positive effects of human capital and relational trust on the core dimensions of brand equity. In the context of the professional services, human capital was found to have a stronger direct impact than relational trust on brand associations, perceived quality and brand loyalty.\nPractical implications: The study provides practical recommendations for marketing managers on how to consider the nature of B2B brand equity and its determinants in developing successful branding strategies. The findings indicate that although relational trust has a positive impact on brand equity, it draws on the clients' positive perceptions of the service providers' human capital. Thus, investments that generate positive perceptions of a service provider's human capital will strengthen its competitive position. Leading to the creation of relational trust and having a strong impact on the dimensions of brand equity, human capital is a strategic asset that needs careful management.\nOriginality/value: The study advances extant knowledge on B2B brand equity by examining contextual conditions and factors that are critical for building strong brands in industrial markets. The study demonstrates that clients' perceptions about the knowledge, skills and abilities of service providers are more important than relational trust for enhancing B2B brand equity.\n",
"role": "assistant"
}
] |
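Harman's single-factor test, used in the row above to screen for common method bias, reduces to checking how much total variance a single unrotated factor extracts. Below is a minimal sketch with simulated data; the item count and responses are hypothetical, and a fuller check would also include the common-latent-factor comparison within the SEM, as the paper describes.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
# Hypothetical stand-in for the survey: 632 respondents x 20 standardized items
X = rng.normal(size=(632, 20))
X = (X - X.mean(axis=0)) / X.std(axis=0)

# Harman's test inspects the unrotated principal-components solution: if one
# factor accounts for the majority of total variance, method bias is suspected.
first_factor_share = PCA().fit(X).explained_variance_ratio_[0]
print(f"Variance extracted by a single unrotated factor: {first_factor_share:.2%}")
```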
Mistaking demographic segments for people: another source of customer abuse
|
[
"Retailing",
"Market segmentation",
"Personal selling",
"Prejudice"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\n__NO_TITLE__: There is a national electronics retail chain in the USA which many of my friends avoid. Though providing low prices, decent quality products and a wide selection of brands, any effort to check out a sale is planned as a swift stop. If I want to see if something is in stock, I make a very fast trip while my wife insists on waiting in the car. The reason is quite simple, yet apparently impossible for us to convince the store managers: the place is obnoxiously overbearingly loud, akin to a popular rock music bar on a Friday night. Only it is not music playing, it is noise. Multiple stereo and television floor displays are cranked up and playing a variety of different CDs, DVDs or programs, such that the overwhelming sound fills every area of the store, even areas for the customers shopping for a computer, telephone or carrying case.\n",
"role": "user"
},
{
"content": "Purpose: - The article's purpose is to note the misuse of the planning tool of segmentation for a priori improper judgments of individual customers when sales staff in stores treat men and women who walk in the door based on their demographic group membership instead of as people.\nDesign/methodology/approach: - The article notes that segments are merely statements of what is \"more likely\" from certain types of group members, and that dealing with individuals requires a different mind set.\nFindings: - The pragmatic business need is to understand how to attract and retain customers of all types, not just those that might fit a profile. What might be attractive to many young people could at the same time discourage older customers, or even \"different\" young people. Certain types of potential customers might appear out of place in a particular store, but that does not translate into them being undesirable parts of store traffic.\nOriginality/value: - This article notes an important observation that many store clerks, restaurant servers, or store specialists in atmospherics might ignore, forget or maybe were never told: all people of any type could be high-spending customers, regardless of the demographic group that is the business' primary target market. At best, people who quickly walk out the door are lost sales. At worst, they could make complaints of racial discrimination.\n",
"role": "assistant"
}
] |
The curvilinear relationship between knowledge leadership and knowledge hiding: The moderating role of psychological ownership
|
[
"Psychological ownership",
"Knowledge management",
"Knowledge hiding",
"Knowledge leadership",
"Curvilinear"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\nIntroduction: Organizational viability and success require leaders to advocate knowledge transfer within organizations (Evans et al., 2015; Riege, 2005). The effective knowledge transfer largely relies on individual motivation and willingness to share knowledge with other members (Gagne, 2009). Scholars have, therefore, begun to search for ways to encourage knowledge sharing from different perspectives such as interpersonal trust, managerial support and organizational culture (Nerstad et al., 2018; Wang and Noe, 2010; King and Marks, 2008; Abrams et al., 2003). Despite efforts to promote knowledge sharing, success has been elusive because of employees' intentional attempts to \"withhold or conceal knowledge that has been requested by another person\" (Connelly et al., 2012, p. 65), which is defined as knowledge hiding (Connelly et al., 2012). Knowledge hiding not only means individuals contribute less to organizational knowledge (Lin and Huang, 2010), but is also detrimental to employees' thriving (Jiang et al., 2019), innovative behaviors (Cerne et al., 2014, 2017), interpersonal relationship (Connelly and Zweig, 2015), and individual-and-unit creativity and performance (Bogilovic et al., 2017; Evans et al., 2015).\nTheoretical background and hypotheses: Knowledge leadership and employee knowledge hiding\nMethod: Sample and procedure\nMeasures: Given the survey was performed in China, the measurements in this study were all translated into Chinese using translation-back-translation procedure (Brislin, 1980).\nResults: Confirmatory factor analysis (CFA)\nDiscussion: Recently, knowledge leadership literature has examined the importance of knowledge leadership in promoting knowledge management practices (Lakshman, 2007, 2009; Yang et al., 2014; Donate and de Pablo, 2015). However, little is known about how knowledge leadership contributes to employees' knowledge management behavior especially for knowledge hiding. In this study, we examine the inverted U-shaped relationship between knowledge leadership and knowledge hiding. Moreover, we find that psychological ownership moderates the linkage between knowledge leadership and knowledge hiding. Specifically, the inverted U-shaped relationship between knowledge leadership and knowledge hiding is more significant among employees with higher psychological ownership while the inverted U-shaped relationship becomes weaker among employees with lower psychological ownership.\nTheoretical implications: This study has made several theoretical contributions to the extant literature. First, it helps further our understanding of cognitive evaluation theory by examining that different levels of knowledge leadership have different influence on individuals' intrinsic motivation. According to cognitive evaluation theory, tangible rewards, competition and evaluation tend to undermine intrinsic motivation, while positive interpersonal context and feedback tend to enhance intrinsic motivation (Gagne and Deci, 2005; Gagne, 2009). In line with cognitive evaluation theory, the findings of the curvilinear relationship between knowledge leadership and knowledge hiding suggest that lower levels of knowledge leadership, which contributes to subordinates' perceiving knowledge as the competitive resource, undermines intrinsic motivation in knowledge exchange, resulting in more knowledge hiding. 
However, higher levels of knowledge leadership, which promotes positive interpersonal contexts such as mutual trust and cooperation and provides more positive feedback to group learning and development, can enhance subordinates' intrinsic motivation to more knowledge exchange, leading to less knowledge hiding. That is, social contexts like knowledge leadership can impact intrinsic motivation differently depending on whether the process is at a low or high level.\nManagerial implications: This study also has some practical implications. First, the findings substantiate the value of knowledge leadership in decreasing knowledge hiding. Accordingly, organizations should encourage and help leaders cultivate knowledge leadership and appropriate management skills, both of which are significant to promote team learning and development and help employees develop mutual trust and cooperation, thereby diminishing the behaviors of employees engaging in knowledge hiding.\nLimitations and future research directions: This study has some limitations. First, despite the fact that we primarily link knowledge leadership to knowledge hiding based on cognitive evaluation theory, we do not examine the motivational mechanisms such as controlled motivation and introjected motivation as the possible predictors of knowledge hiding. Thus, future research on the mediating mechanisms is needed to more thoroughly understand the process of how knowledge leadership may influence knowledge hiding.\n",
"role": "user"
},
{
"content": "Purpose: The purpose of this paper is to examine the curvilinear relationship between knowledge leadership and knowledge hiding and the moderating role of psychological ownership on influencing the curvilinear relationship.\nDesign/methodology/approach: In total, 403 data were collected from participants in a high-technology company via a two-wave survey. Hierarchical regression analyses were used to test the hypotheses.\nFindings: Results revealed an inverted U-shaped relationship between knowledge leadership and knowledge hiding. The employees exhibited more knowledge hiding in a moderate level of knowledge leadership than in lower and higher levels of knowledge leadership. Moreover, psychological ownership significantly moderated the curvilinear relationship such that the inverted U-shaped relationship was more pronounced among employees with high psychological ownership.\nPractical implications: Employees' reaction to knowledge leadership may vary from different levels of knowledge leadership. Moreover, organizations should boost employees' psychological ownership especially for the collective identity that helps them own knowledge as \"ours.\"\nOriginality/value: This study extends both the leadership and knowledge management behavior literatures.\n",
"role": "assistant"
}
] |
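The inverted U-shape and its moderation described in the row above are conventionally tested with hierarchical regression: first a squared predictor term, then its interaction with the moderator. The sketch below uses simulated data; the variable names and effect sizes are hypothetical and are not taken from the study.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 403  # matches the study's sample size; the data themselves are simulated
df = pd.DataFrame({"leadership": rng.normal(size=n),
                   "ownership": rng.normal(size=n)})
df["hiding"] = (-0.3 * df["leadership"] ** 2 * (1 + 0.5 * df["ownership"])
                + rng.normal(scale=0.5, size=n))

# Step 1: an inverted U appears as a significant negative quadratic coefficient.
step1 = smf.ols("hiding ~ leadership + I(leadership ** 2)", data=df).fit()
# Step 2: moderation of the curvilinear effect shows up as a significant
# quadratic-by-moderator interaction term.
step2 = smf.ols("hiding ~ (leadership + I(leadership ** 2)) * ownership",
                data=df).fit()
print(step1.params)
print(step2.params)
```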
Modelling the synergy between fiscal incentives and foreign direct investment in Ghana
|
[
"Fiscal incentives",
"Foreign direct investment",
"Short run",
"Long run",
"ARDL model",
"Ghana"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\n1. Introduction: A foreign direct investment (FDI) takes place when an investor establishes foreign business operations in a foreign country (Chen et al., 2019). There has been intense competition among developed and developing countries to attract FDI in recent years. This competition for FDI is due to the fact that foreign capital creates employment and economic growth, augments the productive capital of a country, promotes transfer of technology and skills to the host country and hence helps alleviate poverty, among other benefits (Obeng, 2014). In the past decades, many nations especially developing nations have been making frantic efforts directed toward the attraction of FDI. The expectation is that FDI would contribute positively to economic growth of the host nation (Ugwu, 2018). FDI can be viewed as assets, which augment other investment streams and make a distinguishing contribution to the growth process of a country (Asante and Gyasi, 2000). With the low level of savings that characterized emerging markets, i.e. developing economies like Ghana, FDI can be a very useful alternative to boosting the productive capacity of such economies, which will lead to growth and job creation (Majavu and Kapingura, 2016). These expected inherent gains in FDI inflows have led to competition for FDI inflows among various countries especially developing countries. Generally, this competition usually presents a herculean task for African emerging markets due to the common image that Africa can be a high-risk investment region (Gumo, 2013).To this end, these economies usually go the extra-mile to put various measures in place to be adjudged as business friendly destinations. Notable among these measures includes the liberalization of the economy, provision of guarantee on repatriation of profit, tax incentives and provision of critical infrastructure as well as cheap labor (Chakrabarti, 2001). Among these measures, tax incentives by far have been a common place for most governments as it provides direct relief to firms and place them in a strategic position thus facilitating their performance and their stay in business. Relief takes the form of personal allowance, investment allowance, loss relief, roll-over relief, pioneer relief and exploration relief, to mention but a few (Peters and Kiabel, 2015).FDI is necessary for promoting development in emerging market countries like Ghana and could benefit from fundamental research. The role of FDI is deemed one of the contributory pillars of economic growth (Halil Kukaj, 2016). The contributory role of FDI inflows in the economic development of Ghana cannot be overemphasized. The data shows that over the past decades, there has been consistent progressive increase in FDI inflows into the Ghanaian economy. For instance, according to the Ghana's Balance of Payments database, the net inflows of FDI into the country have fluctuated between $18,260,970 in 1976 and $3,485,333,000 in 2016. Clearly, this shows an upward trend and considering the most recent data available, it even reinvigorates further the importance of this particular indicator to Ghana's development.Per the records of the World Bank as quoted by the Ghana Investment Promotion Centre (GIPC), in 2012 Ghana recorded FDI inflows of about 8.09% of GDP. 
Additionally, Provisional FDI figures from January to September, 2018 hit 1.3 billion dollars representing 100% of the value recorded same period in 2017, being the value of projects registered with the Centre (GIPC) by foreign investors. There are opposing opinions available in literature regarding the extent to which FDIs can derive economic growth in developing countries like Ghana. However, FDIs have a straight consequence on local and regional economic growth by contributing to capital accumulation as well as enabling innovation and technology transfers to the beneficiary country (Halil Kukaj, 2016).Considering the significant role of FDI in the economic growth of developing economies such as Ghana, it is imperative that the government of Ghana commits to the formulation of policies that would serve as incentives for foreign investors in order to derive FDIs into the economy. As a guide to the Ghanaian government and the Ghana Revenue Authority (GRA) in policy formulation in this regard, an empirical study that examines the impact of corporate tax rates on FDI inflows will be very critical and this study seeks to achieve this.\n2. Theoretical and empirical background: 2.1 Theoretical framework\n3. Research methodology: 3.1 Estimation strategy\n4. Empirical findings and discussions: 4.1 Results for the unit root test\n5. Conclusions: This study aimed at investigating the impact of fiscal incentives on FDI in Ghana. The dependent variable in the study was the FDI variable and the independent variables were fiscal incentives measured with corporate tax rates, market size as measured by the level of real GDP, trade openness and exchange rate. The study found a negative and significant long-run effect of corporate tax rates on FDI inflows but a significant positive short-run effect. The study further found that all the other independent variables such as trade openness, market size and exchange rate although statistically insignificant in influencing FDI inflows in Ghana, conforms to the a priori expectations and the predictions of the eclectic paradigm of FDI in sign.The implication of this study for practice is that it acts as an empirical basis for the Ghana Revenue Service (GRA) to restructure the corporate tax regime in Ghana in order to deal with policy lapses. It also highlights the need for the authority to maintain low corporate rates targeted at non-indigenous companies that are into the production of goods and services for which the indigenous companies have comparative disadvantage in order to attract FDI into the Ghanaian economy. However, the results of the research should be interpreted against some limitations. For instance, the paper did not identify the specific foreign businesses that are more deserving of a low corporate rate and to what extent can that boost FDI inflows in Ghana. With Ghana as a signatory to the Economic Partnership Agreement (EPA) and a member of the most recently adopted African Continental Free Trade Agreement (AFCFTA), a future study that identifies the segment of foreign companies that deserve low corporate rates would be helpful in directing the GRA to enact corporate tax policies that would not make Ghana a potential dumping place for goods and services that can be produced domestically.\n",
"role": "user"
},
{
"content": "Purpose: The purpose of the paper was to investigate the role of fiscal incentives in driving foreign direct investment (FDI) inflows into the Ghanaian economy based on data from 1975 to 2017 with the Eclectic paradigm as the theoretical basis. FDI inflows was the dependent variable whiles trade openness, corporate tax rate, exchange rate and market size were the independent variables with corporate tax rate as the main explanatory variable of interest.\nDesign/methodology/approach: The autoregressive distributed lag (ARDL) bounds test technique was employed to investigate Cointegration in the model. The results showed the presence of cointegration among the variables.\nFindings: The results revealed that corporate tax rates have a significant negative impact on FDI inflows into the Ghanaian economy in the long run and significant positive impact on FDI inflows in the short run. In the context of Ghana, the positive short-run relationship observed is attributed to the lag effect of tax policy on FDI inflows.\nResearch limitations/implications: One obvious limitation of the research is that, it does not identify the specific foreign businesses that are more deserving of a low corporate rate and to what extent can that boost FDI inflows in Ghana. Another limitation is that the data analyzed in the paper is exclusively for Ghana and the findings may not be generalized for other countries.\nPractical implications: Based on the research findings, it is recommended that the Ghana Revenue Service (GRA) restructures the corporate tax regime in the country to deal with the policy lapses. It is also recommended that low corporate rates should be maintained especially in respect of foreign companies that are into the production of goods and services for which indigenous companies in Ghana have a comparative disadvantage in order to drive FDI into the Ghanaian economy.\nOriginality/value: This paper is unique for providing up to date and dynamic insights into the tax incentive and FDI nexus in the Ghanaian context.\n",
"role": "assistant"
}
] |
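A small illustrative sketch of the pre-testing step behind an ARDL bounds analysis such as the one above: every series must be I(0) or I(1), and none I(2), which is typically checked with ADF unit-root tests in levels and in first differences. The series below is a simulated random walk standing in for the 1975-2017 annual data; nothing here reproduces the paper's estimates.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.stattools import adfuller

rng = np.random.default_rng(2)
# Hypothetical random-walk stand-in for the FDI series (an I(1) process)
fdi = pd.Series(np.cumsum(rng.normal(size=43)), index=range(1975, 2018))

# ADF in levels should fail to reject a unit root for an I(1) series,
# while ADF on the first differences should reject it.
for label, series in (("levels", fdi), ("first differences", fdi.diff().dropna())):
    stat, pvalue, *_ = adfuller(series.to_numpy())
    print(f"ADF in {label}: stat = {stat:.3f}, p = {pvalue:.3f}")
```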
Perception of appearance and use of appearance products among Chinese women
|
[
"Marketing",
"Fashion industry",
"Consumer behaviour",
"China"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\nIntroduction: China garnered the attention of international fashion marketers beginning in the 1990s as economic reforms led to the desire for luxury goods among the growing ranks of wealthy city dwellers. The rapid and vast socio-political-economic changes within China that led to this increase in consumer demand challenged fashion marketers to understand the nuances of the newly developing Chinese taste. One target of this research has been to determine the generational differences in preference for and use of appearance-related products. For example, Rocha et al. (2005) found that lifestyle and identity are highly salient factors that differentiate older Chinese women from younger women and affect their consumption of fashion goods.To assist fashion marketers in understanding the identity factors that underlie Chinese women's clothing preferences and choice, the research focuses on differences among Chinese women of three generations:1. those who came of age during the Cultural Revolution;2. those who came of age during the early-to-mid years of the economic reforms of Deng Xiaoping; and3. those who were born after China's one-child policy was implemented and came of age during China's rapid economic growth of the late twentieth and early twenty-first centuries.\nLiterature review: Consumer research\nResearch question: Due to the exploratory nature of the research, the following question was posed:RQ1. Do age cohorts among Chinese women differ in terms of their perception of appearance and use of appearance products?\nMethodology: Collecting data in China is almost impossible for non-Chinese researchers without having Chinese insiders to guide the process and negotiate the concepts of mianzi (social face), quanxi (the construction of close family relationships), and yuarn (the facilitating factor that brings people and/or events together so that goals can be accomplished) (Sin and Ho, 2001; Chan, 2008). For, without establishing a personal link between Western researchers and Chinese subjects, the subjects will not participate. Thus, a Chinese colleague in Beijing was brought into the research early on and assisted with the design and implementation of the project.McGorry (2000, p. 74) emphasizes the importance for cross-cultural researchers to \"become more attuned to social and cultural backgrounds in order to better assess global consumers' needs\". She deems it critical that qualitative data be collected before collecting quantitative data. Given the magnitude and unknowns of researching generational differences of women's perception of appearance in China and finding no similar projects on which to draw early inferences, focus group interviews were held. It was deemed best to hold the interviews in Beijing. Although Shanghai is often thought of as the fashion capital of China, Beijing recently has distinguished itself in the fashion arena (Hume, 2008).Stening and Zhang (2007, p. 129) suggest that in a highly stratified society, such as China, where one's hierarchy is correlated with one's power, \"who is included in the sample takes on greater significance\". Thus, women with high social status and a high degree of personal/economic power were selected by the Beijing colleague to participate in the group interviews. 
Such women would have the economic ability to purchase appearance items of their choice and, because of their social status, would be considered fashion opinion leaders by other Chinese. It was advantageous to hold intensive interviews with small groups of women rather than the eight to twelve participants usually associated with consumer-related focus groups. A total of nine women were interviewed - three in each age group. First, having only three respondents per group made it easier for the researchers to gain their respect. Second, because of the number of questions that the researchers needed to ask (ten) and the desire for in-depth rather than quick responses, a small group was desirable. Finally, because the conversations had to be translated, the two-hour time frame per group did not permit a large number of participants. Based on Strauss and Howe's (1999) model of determining generations by shared, seminal events during the formative years, respondents were grouped into three cohorts. Women in the older group were born during the founding of the People's Republic or shortly thereafter (1943-1958) and came of age during the Cultural Revolution. The middle group was born prior to and during the Cultural Revolution (1959-1973) and came of age during China's socio-economic-cultural shifts and the transition to more individual freedoms (such as choice of clothing). The younger women were born during the early years of the economic/political shifts (1974-1988) and came of age as China's market economy began to boom, as the internet became available, and as China opened its borders to international consumer goods and media. Most are only children. Like their counterparts across the globe, this group tends to be self-centered and self-oriented. They are materialists who value social and economic independence and are media and technologically savvy (Hume, 2008; Kaigler-Walker and Gilbert, 2009; Lee et al., 2004). Instrument development\nFindings: The MANOVA analysis of the Beijing and Shanghai subjects yielded significant results. Consequently, six one-way ANOVAs were used to further define the sources of the group differences (Table II). Four of the six scales produced significant differences: Factor II, sources of fashion information; Factor III, happiness in purchasing new apparel items; Factor IV, confidence in one's appearance; and Factor VI, the preference for purchasing luxury brands. Tukey post hoc analyses indicated which differences between age cohorts were significant on each of the four scales. Table III lists the results, while Figures 1-4 illustrate the relationship between the scales and age. Factor II - sources of fashion information - varied significantly between the young women and the middle-aged women (p=0.030), the young women and the older women (p=0.000), and the mid-aged group and the older women (p=0.022) (Figure 1). By far, the younger subjects were more attentive to and used fashion media to a greater extent than their older counterparts. As Hume (2008) writes, these young women are consummate shoppers who grew up with international media and the Internet at their disposal. Factor III - happiness - varied significantly between the young women and the middle-aged women (p=0.001) and the young women and the older age group (p=0.003), with the young women expressing a significantly higher degree of happiness in purchasing new apparel items than both the middle and the older women. 
When asked during the focus group interviews what made them happy, the young women responded \"having new clothes\". The middle-aged respondents reported being happy when purchasing new apparel items, but also wanting to look younger. The older women's responses had to do with receiving compliments on their appearance, confidence in appearance, and looking younger and feminine. Thus, while happiness with appearance was a salient factor for all three age groups, that which produced the happiness varied. Factor IV - confidence - varied significantly between the young women and middle-aged women (p=0.001) and the young women and older women (p=0.023), with the older women expressing a higher degree of satisfaction with how they looked than the middle and younger aged groups. Kaigler-Walker (1993) reported the same pattern with American women subjects of similar ages. Therefore, young women's lack of satisfaction with appearance and a greater satisfaction among the older subjects may be attributable to a universal phenomenon rather than being solely a Chinese cohort effect. Factor VI - preference for luxury brands - varied significantly between the mid-aged subjects and the older subjects (p=0.009) and approached significance between the younger and older subjects (p=0.080). The mid-aged subjects displayed greater preference for purchasing and wearing luxury brand apparel than the other two groups. These women came of age during China's change from the planned to the market economy, when foreign luxury goods became available to the emerging wealthy class. Thus, these women developed a taste for luxury goods (Fenton, 2008) early on. Fenton (2008) writes that for this group of women, \"fashion represents obviously status, but luxury is also a kind of bridge to the modern world for them\" (p. 13). The All China Women's Federation (2007b) reported that although women aged 20-50 are the primary consumers of luxury goods, those with senior positions (i.e. the highest incomes) spend the most on high fashion items. A difference among the cohorts was not found for Factor I, the desire to look womanly. However, its value to understanding Chinese women's perception and use of appearance-related products cannot be overstated. In each of the focus group interviews the respondents discussed the desire to be feminine, to be elegant (Kaigler-Walker and Gilbert, 2009). While to the Western ear the term \"elegance\" would indicate a level of sophistication, and perhaps a degree of formality, the term in Chinese means having a feminine body shape and looking nice. The desire to look elegant drove young women during the Cultural Revolution to alter their uniforms to fit more snugly at the waist so as to show off their curves - a behavior reported by the participants in the focus group of older women and discussed at length by Ip (2003). The lack of difference found among the cohorts on Factor V - dissatisfaction with appearance - is more difficult to assess. That the factor was extracted and is reasonably reliable (α=0.62) indicates that the concept of body/appearance dissatisfaction was of concern to the subjects equally across the age groups. However, this differs from Kaigler-Walker's (1993) finding that among American women, those in their middle years are the least satisfied with their appearance, followed by young women. 
Further study is warranted.\nConclusion: The research sheds light on differences among Chinese women of three generational cohorts as to their perception of appearance and use of appearance products and opens the door for further study. Generating the six factors provides a starting point for identifying the overarching cultural domains of Chinese women's perception. Finding differences on four of the six factors among the three age cohorts adds further understanding of the uniqueness of the three consumer groups. Marketers would be well advised to consider both the similarities and the differences among Chinese women across the age spectrum in planning, producing and promoting fashion products.\nImplications and future research: The research barely assesses the tip of the iceberg regarding Chinese women's perception of appearance. The researchers need to make further analysis of the data collected that goes beyond just generational differences among the subjects and analyzes the data in a way that could indicate cross-cultural differences that exist among women in China, Taiwan and Hong Kong. In addition, research needs to be conducted on additional socio-economic factors such as social status, education and geographical regions within China outside of Beijing and Shanghai. The researchers have extracted a 24-item survey from the six factors and plan to gather data in cities such as Guangzhou, Hangzhou, Xi'an and Harbin.\n",
"role": "user"
},
{
"content": "Purpose: - The purpose of this paper is to determine and assess generational differences in Chinese women's perception of appearance and appearance products that can be explained by the political, economic, and social policies of China over the past 40 years.\nDesign/methodology/approach: - This is an exploratory research using focus groups and a survey. Three age cohorts of Chinese women, those who came of age during the Cultural Revolution, those who came of age during the early and mid years of China's economic reforms, and those born after the implementation of China's one-child policy and the market economy, are assessed as to their perception of appearance and use of appearance-related products. Six cohort differences that could be attributable to these socio-economic-political events are found.\nFindings: - Significant differences between the age cohorts are found on four of six appearance-related factors - sources of fashion information, happiness in purchasing new apparel, confidence in appearance, and preference for purchasing luxury goods.\nResearch limitations/implications: - Although the study identifies cultural and generational constructs of Chinese women's perception of appearance and use of appearance products, due to its exploratory nature the findings are not generalizable to the larger Chinese community. In addition, to provide a more complete understanding of the generational differences, future research needs to take into account additional demographic information such as education and income.\nOriginality/value: - No previous study has attempted to determine the impact of China's economic, social, and political policies on generational cohorts of Chinese women as to their perception of appearance and use of appearance products. Fashion marketers will benefit, as understanding these powerful determinants of purchasing behaviour are paramount to successfully planning, developing and marketing of apparel products.\n",
"role": "assistant"
}
] |
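The analysis pattern in the row above (one-way ANOVAs following a significant MANOVA, then Tukey post hoc tests to locate pairwise cohort differences) can be sketched as follows. The three cohorts' factor scores are simulated; the group sizes and means are hypothetical, not the study's data.

```python
import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(3)
# Hypothetical scores on one appearance factor for three age cohorts
young = rng.normal(loc=3.8, scale=0.6, size=50)
mid = rng.normal(loc=3.2, scale=0.6, size=50)
older = rng.normal(loc=3.0, scale=0.6, size=50)

f_stat, p_value = f_oneway(young, mid, older)  # one-way ANOVA per scale
print(f"ANOVA: F = {f_stat:.2f}, p = {p_value:.4f}")

# Tukey HSD identifies which cohort pairs differ on the scale
scores = np.concatenate([young, mid, older])
cohort = ["young"] * 50 + ["mid"] * 50 + ["older"] * 50
print(pairwise_tukeyhsd(scores, cohort))
```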
The e-government paradox in post-Soviet countries
|
[
"Kazakhstan",
"E-government",
"Post-Soviet countries"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\nIntroduction: Public services improvement is a significant component in the agenda of developing countries, not least because citizens judge the progress of development through the lens of better education, healthcare, social welfare, public housing and so on (Knox, 2019). Improvements in key public services often give citizens a greater sense of well-being and awareness of a better quality of life. One important aspect of the public services improvement agenda is the use of electronic government, about which many advantages are offered (Weerakkody et al., 2012). For developing countries, the persistent issue of corruption can be reduced, at least in part, through removing personal transactions between officials and service users. Citizens witness faster, more efficient, services that avoid several exchanges of documents with multiple state agencies that may have different and complex bureaucratic requirements. Technical developments in electronic government are opening up new linked services using personal data gathered from service users. Children registered at birth, for example, can be allocated kinder garden and school places, allowing better strategic planning by state agencies.\nE-government research in developing countries: The World Bank defines e-government as:The use of information technologies that have the ability to transform relations with citizens, businesses, and other arms of government. These technologies can serve a variety of different ends: better delivery of government services to citizens, improved interactions with business and industry, citizen empowerment through access to information, or more efficient government management. The resulting benefits can be less corruption, increased transparency, greater convenience, revenue growth, and/or cost reductions.(World Bank, 2015a, p. 1)\nE-government in Kazakhstan: Kazakhstan is a Central Asian country which gained independence from Russia in 1991 and is the largest landlocked country in the world, about the size of Western Europe, but with a relatively small population (18.04 m). It has borders with China, Kyrgyzstan, Russia, Turkmenistan and Uzbekistan and will be a key route in the One Belt, One Road initiative. It has achieved significant economic success moving to become an upper middle-income country in two decades largely through its natural resources base. Despite being an authoritarian state, it is viewed as a progressive governance reformer in the region. The e-government programme was launched in Kazakhstan following the decree of the president in 2004 to fulfil three key objectives: to provide fast access to public services; to improve public services' effectiveness through the widespread use of ICT in the public sector; and to minimise personal contact between customers and government officials in order to reduce administrative corruption.\nMethods: The methods used for the study comprised several waves of primary data collection: 6 focus groups, 25 interviews and participant observation with central and local government officials, civil society organisations and the business sector (see details in Table I). Collecting data in Kazakhstan (and other Central Asian countries) is particularly difficult for a number of reasons. Central government bodies tend to be highly legalistic and closed in terms of access to information, most of which would be readily available in western countries. 
They require formal letters of invitation from researchers, which can take a long time to process, are suspicious of researchers' motives, and are unwilling to engage in any criticism lest it result in \"punishment\" and negatively impact their career progression. There are also methodological concerns about the internal and external reliability of the qualitative data presented in the research. In terms of the latter, replication studies might well test the features of the e-government paradox outlined in this research. Internal reliability is less problematic given that the data were collected via a small team and hence there was agreement on what was seen and heard. Recorded interviews and observations limit the issues of internal reliability but can also act as a deterrent to the \"truth\" (the problem of socially desirable responses). Core questions for our research participants included, but were not limited to, the following:\n\"Freeze all but the technology\": The experience of \"freezing all but the technology\" as a key feature of the e-government paradox refers to the impetus for technology innovation, the latest and most expensive equipment, and a penchant for digital superiority amongst neighbouring countries. A core pillar of Digital Kazakhstan (2017, p. 5), for example, is transformation of the state's infrastructure through \"ground-breaking technologies to provide services for the business sector and general population\". One of the key projects cited under the thematic area \"transformation to a digital state\" is e-Health, which promises: artificial intelligence in terms of diagnosis and management of treatment plans; mobile health and remote consultations; and e-health passports for the citizens of Kazakhstan. This, when doctors are poorly trained, lack basic equipment, are badly paid and, as a result, engage in corrupt practices to supplement their incomes.\nTransaction rather than transformation: This feature of the e-government paradox links directly to the transactional nature of the ways services are defined and provided in post-Soviet countries. Services typically rely on documentary exchange, official stamps, and ambiguity around how public services are defined. In OECD countries, public services are broadly defined as those services which are paid for from the public budget and delivered through government bodies or contractually via the private or third (NGO) sectors. However, there is ambiguity in what constitutes \"public services\" in Kazakhstan and, more widely, in CIS countries. In Kazakhstan, as well as in other CIS countries, public services are defined as those services which require an exchange of documents (registration of a car, issue of a birth certificate or passport, land registration, etc.). The definition of public services is highly legalistic and convoluted. For example, Article 1 of the \"Law On Public Services\" in Kazakhstan (5 April 2013: No. 
88-V) provides the following definition (translated from Russian): Public service is one of the forms of implementation of separate state functions performed individually by a request of service recipients and aimed at implementation of their rights, freedom and legal interests, provision of appropriate material or non-material benefits to them.\nIsomorphic mimicry: Post-Soviet countries are being actively encouraged by international organisations to not only adopt e-government, but also to move beyond it towards more inclusive government, a form of isomorphic mimicry aimed at emulating western-style liberal-democratic practice (DiMaggio and Powell, 1983). Some scholars have described the concept of e-government as part of a wider agenda leading to greater democratisation (Perri 6, 2004). Large donors such as the World Bank, UNDP, OSCE, European Union and the Asian Development Bank have stressed the need for greater inclusiveness in the decision-making process by citizens as a condition of funding good governance initiatives. Focus group participants, particularly government officials, expressed concern about this wider goal: While we see the merits of e-government in improving services to our citizens particularly when it comes to reducing corruption and the faster exchange of documents, we don't want it to be a means of threatening the stability of our country. We have seen colored revolutions in Georgia, Ukraine and Kyrgyzstan. We are proud that Kazakhstan is a peaceful country with many ethnic groups living harmoniously. There should be no political agenda with e-government.\nImpact on corruption: E-government has played an important role in tackling corruption largely due to reducing or obviating the need for the official-citizen interface where bribes are commonplace. This appears to be the case not just for Kazakhstan but also across post-Soviet countries. Using data from the Worldwide Governance Indicators on the control of corruption and the UN e-government index from 2003 to 2018 across 15 post-Soviet countries, we examined the relationship between e-government and the control of corruption. The data show a strong positive correlation (0.610) between control of corruption and e-government and a coefficient of determination of 0.37. This indicates that 37 per cent of the variance in the control of corruption is explained by e-government.\n(Half)-open government: Focus group participants were given some basic background information on the success story of e-government in Kazakhstan. In 2012, Kazakhstan was acknowledged by the UN as a \"leader in e-participation\" and placed in second position (out of 193 countries) in terms of e-participation along with Singapore, behind only two developed countries, the Netherlands and South Korea (see Figure 3). This was in recognition of the government's attempt to engage with the public through the open government portal which provided information in five major areas: open data; open legislative acts; open budgets; performance assessment of government bodies; and open dialogue. In particular, the open dialogue component presented a platform for interactions between the government and citizens through blogs of ministers and akims. Interactions between citizens and state bodies included: comments on specific government policies, participation of citizens in opinion surveys and a platform for recording complaints.\nDiscussion: So, what explains the e-government paradox? 
Our fieldwork suggests a number of factors, but the overarching explanation is that Kazakhstan craves international respectability, despite being an authoritarian state, and wants to be seen as a serious actor on the world stage. In that sense it is Janus-faced. Its outward-facing policy, explicitly stated, is to become one of the top 30 developed countries by 2050 and, hence, many of its formal public policies must \"read well\" in the international arena. For example, it is working with the OECD and spends a large amount of money on technical support from its experts with the ultimate aim of becoming an OECD member state. It is a member of the United Nations, the first Central Asian country to join, and was elected as a non-permanent member of the UN Security Council (2017-2018). Through a multi-vector foreign policy, it has assumed geo-political significance with Russia and China in the East, most recently as a key player in the One Belt One Road initiative, and the USA and European Union in the West. It also maintains cordial relations with the Islamic world. Kazakhstan became a member of the OSCE and chaired the Organisation of the Islamic Conference. Most recently, Kazakhstan became a member of the WTO in 2015, hosted Expo 2017, a major worldwide future energy event, and has facilitated or mediated peace-building talks on international conflicts (Russia and Ukraine; Russia and Turkey; and Syria).\nConclusions: Moving beyond the parameters of Kazakhstan, particularly in post-Soviet authoritarian states, e-government is legally and politically circumscribed to a narrow range of public services and a concern that it promotes an international agenda which leads inexorably to e-democracy. The latter carries with it expectations of more deliberative processes of decision making with greater citizen engagement, a stronger, more embedded civil society, greater access to information and a free media, many of which are anathema to authoritarian states. The e-government paradox not only displaces a concentration on \"human (social) services\" in education, health and social welfare but, more fundamentally, can be seen as a western model that ultimately promotes open government and, in so doing, challenges the core values of some post-Soviet countries.\n",
"role": "user"
},
{
"content": "Purpose: The purpose of this paper is to investigate the emergence of e-government in post-Soviet countries using Kazakhstan as a case study. Extant research on e-government in developing countries highlights significant benefits including improved public services, reduced corruption, and more open and inclusive government. The paper asserts the presence of an e-government paradox which limits its potential to improve public services.\nDesign/methodology/approach: Primary data were collected from a number of sources: 6 focus groups with central government agencies, local authorities and civil society organisations; 25 structured and semi-structured interviews; and participant observation.\nFindings: The research finds evidence of an e-government paradox in five forms: an emphasis on technological development; transactional services are faster but have displaced attention from core public services; petty corruption has been reduced but grand corruption remains; isomorphic mimicry; and greater participation by citizens has been limited.\nResearch limitations/implications: The focus of the research is Kazakhstan. Applying the lessons learned to other post-Soviet countries has limitations given their different stages of development since independence.\nPractical implications: The key practical implication of this research is that countries can become absorbed by e-government technology without questioning the fundamental business model which underpins how public services are delivered. Ultimately, this impacts on the social value of e-government.\nOriginality/value: While existing research has examined how e-government has been implemented in developing countries, this paper focusses on Kazakhstan as an authoritarian state with wider implications for post-Soviet countries.\n",
"role": "assistant"
}
] |
Examining alignment between supplier management practices and information systems strategy
|
[
"Supplier management practices",
"Information systems strategy",
"Alignment",
"Supply chain management",
"Information systems",
"United States of America"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\n1 Introduction: Firms are increasingly trying to find ways to improve supply chain performance by managing key information processing requirements (Gunasekaran and Ngai, 2004). To do this, they need to adopt specific information systems (IS) that support the processes and activities of their supply chains. A key source of information processing requirements of any focal firm emanates from supply chain processes that are upstream to it and that interface with its suppliers (Gunasekaran et al., 2004; Spekman et al., 1998). Hence an important aspect of managing the supply chain's information processing requirements lies in managing those associated with supplier interfaces and supplier management practices. That is, in order for them to be effective and enhance supply chain performance, IS that a focal firm deploys in its supply chain should support and enhance the objectives and goals of supplier management practices. A good example of this is Wal-Mart's use of IS to support its lean supplier practices. Wal-Mart has made significant investments in inventory control, materials management, point-of-sale integration, and inventory tracking radio frequency identification technology systems. These systems help suppliers in their production planning and order fulfillment in the context of Wal-Mart's orders and provide up-to-the-minute information on inventory and logistics, leading to higher inventory turnover for Wal-Mart, and reduction of its labor and inventory costs; and thus supporting the company's low-cost supply chain (Nickles et al., 1998; Jones et al., 2005).The support of IS to supplier practices requires specific IS applications to be aligned with the appropriate supplier management practices. If IS is to be successfully integrated into supplier practices, firms should be able to analyze how information processing needs of the focal firm's supplier practices can be supported by specific IS applications. Thus, the research question we examine in this paper is:[...] how can a focal firm align its supplier management practices with an appropriate IS strategy and what is the impact of such alignment on supply chain integration and supply chain flexibility?Although the supply chain-IS interface literature offers ideas on how IS can be used in the supply chain (Shah et al., 2002; McLaren et al., 2004; Gunasekaran and Ngai, 2004) to achieve process integration and enhanced firm performance (Rai et al., 2006), there are no studies that explain why or how specific IS applications can be gainfully aligned with different types of supplier management practices. Thus, this study extends the literature that is beginning to recognize the importance of understanding the benefits of different types of IS applications to supplier practices.In this paper, we use the contingency theory to examine the alignment between supplier management practices (that is, lean supplier practices and agile supplier practices) and IS strategies (that is, particular types IS applications portfolios), and its effects on the extent of integration and flexibility of the supply chain. We theoretically develop and empirically examine a framework proposing that the respective fit or alignment between the \"lean\" and \"agile\" supplier practices and the \"IS for efficiency\" and \"IS for flexibility\" IS strategies, is associated, respectively, with enhanced integration and flexibility of the particular supply chain. 
Our findings are based on statistical analysis (which included a pilot study, exploratory and confirmatory factor analysis, and structural equation modeling (SEM)) applied to survey data from senior managers (from the purchasing, operations, supply chain functions) from 205 firms.Section 2 first describes the theoretical background from the literatures on supplier management practices, IS strategy and IS-supply chain alignment. It then develops the hypotheses examined in this paper. Section 3 provides a description of research methodology and analysis. Section 4 contains a discussion of the results and their theoretical and managerial implications. Section 5 concludes with a discussion of limitations and future research.\n2 Theoretical background and hypotheses development: 2.1 Supplier management practices, information systems strategy, supply chain integration and supply chain flexibility\n3 Methodology: 3.1 Questionnaire development and pilot study\n4 Discussion: The results for H1a and H1b are found to be significant (t-statistic=5 and t-statistic=2.3, respectively). The R2 of supply chain integration (0.356) indicates a good amount of variance explained by the model. The significance of H1a indicates that for a focal firm, the higher the extent of efficiency or leanness in its supplier practices, the greater the extent of its integration with supply chain partners.H1b indicates that IS for efficiency positively moderates the relationship between lean supplier practices and supply chain integration. In particular, if a focal firm is planning to cut its cost by creating the most cost efficiency in its supplier management practices, then the firm should also plan and invest in IS to support and enable implementing its lean supplier practices. Such alignment between lean supplier practices and IS strategy for efficiency will result in better supply chain integration. In other words, for a focal firm, acquiring specific IS applications that help in improving and managing production planning to minimize inventory levels enables the suppliers to adjust their production schedules and plan ahead for the components needed for production. Sharing this information with suppliers will result in better coordination between firms and their suppliers, allowing for better support of lean supplier practices such as reducing lead times. This would result in lower production costs and greater cost efficiencies.The relationship for H2a is found to be significant (t-statistic=4.2). This implies that for a focal firm, planning to respond quickly and effectively to customers' needs is determined by the degree of agility of its supplier practices. The greater the extent of agility in its supplier practices, the better the supply chain flexibility. The relationship for H2b is found to be non-significant (t-statistic=1.2). The R2 of supply chain flexibility is equal to 0.21. This result indicates that IS for flexibility does not moderate the relationship between agile supplier practices and supply chain flexibility. The findings suggest that adopting IS applications that represent an IS for flexibility IS strategy does not enhance the ability of the firm to execute its agile supplier practices, and therefore does not help in improving the flexibility of its supply chain. One explanation for this result could be that firms may be unable to quickly implement the applications that are required in support of agile supplier practices.
Thus, even if they plan to implement IS for flexibility in alignment with agile supplier practices, lack of quick implementation might prevent accrual of benefits from such alignment. For example, Nike acquired supply chain software from i2 Technologies to help in its agile supply chain practices. However, Nike moved to adopt the new application without appropriately managing the implementation of the software. The result was a $100 million loss in its revenue (McLaren et al., 2004).4.1 Theoretical contributions\n5 Conclusion, limitations and future research: The outcome of this study suggests that aligning supplier management practices with IS strategy will result in beneficial outcomes for the supply chain. There are limitations that need to be kept in mind while generalizing the results of this study. First, it considers primarily internal supply chain integration as a representative of the entire supply chain integration (internal and external). Second, the study is done at the firm level with one person from each organization responding to the survey. A single respondent was asked questions about supplier practices and IS strategy. Although the respondent was a senior purchasing manager/director, or supply chain manager/director, he or she may not have been aware of IS-related matters and thus may not have been in a position to answer the IS strategy-related questions. Future research can take a global perspective and test this model from the perspective of supplier practices in global supply chains. Global supply chains are more complex in that they have to deal with suppliers and customers from different geographical regions that vary in terms of infrastructure, market demands, supplier sophistication, and IS sophistication. Aligning supplier management practices with corresponding IS strategy is thus expected to present greater challenges that would require additional theoretical and empirical examination.\n",
"role": "user"
},
{
"content": "Purpose: - The purpose of this study is to examine alignment between supplier management practices and information systems (IS) strategies (i.e. particular IS applications portfolios), and its effects on supply chain integration and supply chain flexibility.\nDesign/methodology/approach: - The paper develops a model and hypotheses suggesting that \"lean\" supplier practices, when aligned with the \"IS for Efficiency\" IS strategy, have a positive association with supply chain integration. Similarly, \"agile\" supplier practices when aligned with the \"IS for Flexibility\" IS strategy, have a positive association with supply chain flexibility. The paper empirically validates the model using survey data from directors and senior managers in purchasing and supply chain functions from 205 manufacturing firms in the USA. Factor analysis was conducted to test convergent and discriminant validity. Hypotheses testing was done via structural equation modeling (SEM) analysis using SMART Partial Least Square (PLS) software.\nFindings: - The study finds that lean (agile) supplier management practices are positively associated with supply chain integration (flexibility). Further, alignment of lean supplier practices and IS for Efficiency enhances supply chain integration, as assessed by a positive moderating effect of IS for Efficiency on the relationship between lean supplier practices and supply chain integration.\nResearch limitations/implications: - The study contributes to the supply chain - IS interface literature by developing a theoretical basis for analyzing the benefits of different types of IS applications to particular types of supplier management practices. Limitations are that it considers primarily internal supply chain integration as a representative of the entire supply chain's integration (internal and external), and that it relies on data from one person from each organization responding to the survey.\nOriginality/value: - The paper theoretically develops and empirically examines a framework proposing that the respective fit or alignment between lean and agile supplier practices with efficient and flexible IS application portfolios has implications for the integration and flexibility of the particular supply chain.\n",
"role": "assistant"
}
] |
Author-rated importance of cited references in biology and psychology publications
|
[
"Publications",
"Biology",
"Psychology",
"Authorship",
"Information retrieval"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\nIntroduction: In scholarly research, cited references often play an important role in the conceptual formulation and methodological operationalization of a study. de Mey (1982) suggests that bibliometric processing and quantitative mapping of both citation networks and co-citation clustering can be used to outline the cognitive structure of a field of knowledge, to identify the intellectual affinity and descent, and to detect the areas of specializations and invisible colleges or author groups. Citation analysis focuses on the investigation of the relationship between the citing and cited documents or \"the links expressed in the references\" (de Mey, 1982, p. 124). Consequently, citation research has centered on examining quantitative patterns of in-degree citations or out-degree references, the configuration of citation connections, and the distribution and formation of topic clusters and author networks. In contrast, there are few behavioral-based citation studies that explore the importance of a cited reference based on the viewpoint of the citing author.The present study investigates authors' own assessment of citation importance by involving the primary authors of recent biology and psychology research articles. Authors rated the importance of each citation, and these ratings were then examined as a function of multiple text features (frequency, length, location, and treatment) of the reference in the citing document. Participants' evaluation of each cited reference also included an indication of the reason for citing and the relationship with the cited authors. The overall goal of this investigation was to understand the citation behavior from the citing author's point of view and to identify a set of effective predictors of citation importance to develop and improve citation-based information retrieval systems.\nPrevious research: Previous citation research has addressed a variety of issues such as document retrieval, citation functions, and textual properties. Research on scientific discourse has investigated the composition structure of science papers.Citation-based information retrieval\nMethodology: The purpose of the study was to examine the association between the dependent variable of author-rated citation importance and three separate sets of independent variables: elements of citation properties; reasons for citing documents; and relationships between the citing and cited authors. Elements of citation properties include textual characteristics of the cited reference such as citation length, citation frequency, citation location, and treatment of citations as per Bonzi (1982). The reasons for citing references were a list of seven key reasons generated based on both the previous research (e.g., Garfield, 1965; Shadish et al., 1995) and the input of the pretest participants. Similarly, types of author relationships were also generated based on a combination of the previous research (e.g., Shadish et al., 1995; Case and Higgins, 2000) and pretest participants' suggestions. See the sample survey page in Figure 1 for the seven statements of reasons and five types of author relationships.Participants were authors of a recently published empirical article in either biology or psychology. Using his/her own article, each author evaluated the importance of all cited references, selected the best reason for citing a given reference, and indicated his/her relationship to the authors of the cited reference. 
Because the references were all nested within the context of a single article, the data for individual references were not statistically independent of each other. The data can, however, be analyzed at both the level of the individual reference (which we call Level 1) and at the level of the entire article (Level 2). For example, in trying to predict the author's rated importance of a given citation, we analyzed the impact of individual reference level (Level 1) variables, such as that citation's frequency, length, location, and treatment, as well as article-level (Level 2) variables, such as the author's discipline, gender, rank, and institutional status. In addition, for every Level 1 variable, such as frequency or citation length, we calculated the mean value of that variable for all citations in the article. These mean values, such as mean frequency or mean citation length of all references in the article, also represented article-level (Level 2) variables. Multi-level linear modeling was used to reflect the impact of independent variables at both levels. Table I lists the Level 1 and Level 2 coded predictors we used for the data analysis.Research questions\nResults: The data analysis produced a rich set of results and the following text summarizes some main findings including analyses of citation importance, citation treatment, and author relationship.Description of the sample data\nDiscussion: The present study attempted to \"step inside the author's head\" (as per Shadish et al., 1995) by asking biology and psychology authors to evaluate every cited reference in their articles. The results advance citation research theoretically and suggest possible implications for retrieval system development. Even though there were substantial disciplinary differences in the location of references in the introduction and discussion sections of the citing article, there were surprisingly few differences in importance rating due to discipline, rank, affiliation, or gender. Such a lack of differences may be due in part to our selection of relatively lengthy empirical articles, which generally followed the IMRD format. Lengthy articles with more than 35 references might not be representative of the core research literature of biology or psychology. The overall lack of disciplinary differences suggests that in terms of citation patterns and referencing structure of the field, psychology is more similar to natural science than to social science disciplines (Small and Crane, 1979).The study found that textual properties such as citation frequency, length, location, and treatment were all significant indicators of citation importance. However, there were at least two common limitations concerning the impact of textual properties. First, there were often \"pond\" effects such that the impact of a textual property on importance was lessened when there were relatively many citations with that textual property. Second, there was also a reduction of impact for self-citation or other close relations between citers and citees. To illustrate these limitations, the more frequently a reference was cited, the more important it was judged, but this relationship was weaker if there were relatively many multi-cited references in an article, and if the reference was a self-citation.
The latter finding suggests that participants used a different process for judging affectively \"hot\" references - their own work and that of authors in their immediate professional network - than for judging \"cold\" references - work by authors who were outside their immediate professional network. This result is consistent with previous research on how people relate magnitude to the value of a stimulus (Hsee et al., 2005). A subjective \"valuation by feeling\" tends to describe judgments about self-citations or references by citees with whom the citing author has a close relationship, whereas a rule-based \"valuation by calculation\" tends to describe judgments of non-self-citations.References cited for reasons of \"general background\" and \"suggest limitations\" or \"future research\" were judged as less important than references cited for other reasons. References cited for conceptual ideas or reasons of methodology/data were rated as more important than those not cited for these reasons.In this study, we used both textual properties and author's reasons for citations to define perfunctory citations. Based on Cano's (1989) finding that the largest proportion of perfunctory citations occurred in the initial 15 percent of the citing document, we defined perfunctory citations as those that occurred only once and in the introduction, and were considered as \"general background\" citations by the author. We also found that the two textual properties (citation occurred only once and in the introduction) served as a good predictor of references cited for \"general background.\" In contrast, we defined organic citations as references cited for \"conceptual ideas\" and \"methodology and data\" reasons. The analysis revealed that perfunctory citations were judged as less important both at the individual reference level and at the citing article level. Meanwhile, organic citations received highly positive ratings in citation importance at both Level 1 and Level 2. There was a \"pond effect\" limitation such that the relationship between organic citations and importance was stronger in articles where there were relatively few organic citations.Interestingly, more than 80 percent of the references in both biology and psychology were never explicitly described in the text, or, using Swales' (1990) terminology, were never integral references. The overwhelming number of non-integral citations in biology and psychology articles may indicate that Swales' binary classification does not represent a useful description of references cited in scientific empirical articles. Perhaps the integral/non-integral dichotomy better describes citations in the humanities. Nonetheless, this lack of explicit citations suggests the difficulties in extracting citation passages meaningfully for automatic retrieval, as the majority of the citations are embedded in the text without independent passages devoted to describing the particular contribution of a cited reference.References cited for general background were less likely to receive in-depth treatment whereas references cited for conceptual or methodology/data reasons were generally more likely to be treated thoroughly. Authors cited more frequently, and gave more in-depth treatment to, a work by an author with whom they held a close relationship. Self-citations were more likely to be cited for conceptual ideas and methodology/data, and they were less likely to occur only in the introduction section of an article.
Our results revealed that although self-citations may be an act of personal promotion or a part of rhetorical strategy as per Hyland (2003), they were uniformly judged by authors as important to the citing document. Self-citations contributed to the core conceptual themes and research framework of the citing article.An author's judgments of citation importance may not necessarily serve as the optimal criterion for judging citation value for a particular user, but nonetheless, the results suggest some implications for developing a citation-based document retrieval system. We envision a system that retrieves documents not only based on conventional term-matching mechanisms but also based on citation data and referencing relations. Such a system would need to produce a legitimate relevance ranking of all documents cited in a given work. Considering those factors that are relatively easier to automate, we recommend the following:\n* Because self-citations were found to be equally important for biology and psychology authors, all citations to oneself or to co-authors should be ranked relatively high on relevance.\n* Citation frequency was found to be an effective indicator of citation length and citation importance. We suggest giving more weight to references that are cited multiple times in the citing document, especially in those citing documents where there are very few multi-cited references.\n* References cited for methodology and data reasons were more likely to be treated in depth than references cited for other reasons. We suggest giving high relevance ranking to references cited in the Methods and Results section of the citing document.\n* References that occurred only one time, and in the Introduction, were more likely general background and judged as less important than references that occurred in other sections of the citing document. We suggest giving low relevance ranking to references cited only once and cited only in the Introduction section of the article.\nIn the somewhat more distant future, a retrieval system might incorporate affective-based ratings, individual reasons for citing, or even social network characteristics. Such a system would implement a user-profile module to allow users to record their preferences in subjective assessment, indicate their professional relationships, and/or personal characteristics in citing.\nConclusion: The study reported here has filled a gap in citation research by investigating author-based citation behavioral patterns. Our approach allowed us to examine closely the associations between citation importance, citation reasons, and relationships between citers and citees. We found that multiple textual features, including citation frequency, length, and location, can predict citation importance. We also found that references cited for conceptual ideas or methodology and data reasons are highly important, as are citations to self and professionally-related others. Our analyses also extended to other areas such as classification of perfunctory versus organic citations, valuation by rule versus by feelings, and the effect of context on the evaluation of individual citations. In addition to having identified those elements that predict the importance of a cited reference, we consistently found that article-level context affected authors' judgments of citation importance. Although context effects are commonly observed in psychology research, their importance has seldom been raised in citation analysis.
The statistical technique of multi-level modeling requires an explicit awareness and analysis of such context effects. Furthermore, we found that rule-based judgments of citation importance, such as counting frequency of occurrence, predicted better for non-self citations than for self-citations. This is the first study in citation research to distinguish the effects of rule-based versus affect-based factors on judgments, but this distinction has valuable implications for applying author-rated citation importance. Taken together, the various findings increase understanding of scientific scholars' citation behaviors, which in turn, helps to establish a better foundation for a systematic conceptualization of author-based citation research.Findings of the study have design implications for a citing-data-augmented document retrieval system. Such a system would integrate conventional retrieval principles with citation associations. The ranking of the citation-based retrieval set may be accomplished by giving high importance to references that are multi-cited or self-cited. Further studies are needed to confirm, integrate, test, and implement algorithmically effective elements into a retrieval system. The long-term goal would be to design a citation-based retrieval system that not only automates textual properties and citation passage data but also builds a user profile model for high performance, user-directed, and individual-oriented document retrieval.\n",
"role": "user"
},
{
"content": "Purpose: - The present study aims to investigate how textual features, depth of citation treatment, reasons for citation, and relationships between citers and citees predict author-rated citation importance.\nDesign/methodology/approach: - A total of 49 biology and 50 psychology authors assessed the importance, reason for citation, and relationship to the cited author for each cited reference in his or her own recently published empirical article. Participants performed their evaluations on individualized web-based surveys.\nFindings: - The paper finds that certain textual features, such as citation frequency, citation length, and citation location, as well as author-stated reasons for citation predicted ratings of importance, but the strength of the relationship often depended on citation features in the article as a whole. The relationship between objective citation features and author-rated importance also tended to be weaker for self-citations.\nResearch limitations/implications: - The study sample included authors of relatively long empirical articles with a minimum of 35 cited references. There were relatively few disciplinary differences, which suggests that citation behavior in psychology may be similar to that in natural science disciplines. Future studies should involve authors from other disciplines employing diverse referencing patterns in articles of varying lengths and types.\nOriginality/value: - Findings of the study have enabled a comprehensive, profound level of understanding of citation behaviors of biology and psychology authors. It uncovered a number of unique characteristics in authors' citation evaluations, such as article-level context effects and rule- versus affective-based judgments. The paper suggests possible implications for developing retrieval algorithms based on automatically predicted importance of cited references.\n",
"role": "assistant"
}
] |
How can family winegrowing businesses be sustained across generations?
|
[
"Qualitative",
"Knowledge sharing",
"Entrepreneurship",
"Leadership",
"Conceptual/theoretical",
"New Zealand",
"Family business succession",
"Winegrowing"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\nIntroduction: Family businesses feature prominently in both \"Old World\" and \"New World\" wine countries (Orth et al., 2007; Remaud and Couderc, 2006). Moreover, some of the oldest surviving family businesses across all industries are winegrowing businesses (O'Hara, 2004), while younger wineries often gain a competitive advantage by accentuating their heritage and promoting themselves as a family business (Hoffman et al., 2006; Strickland et al., 2013). Although there is variance across culture, geography and government policies, family winegrowing businesses share similar challenges to other family businesses including conflict, succession or growth (Evert et al., 2016). While each wine region differs, family winegrowing businesses share similar intergenerational challenges that are common across the world of wine. Importantly, the wine industry provides a rich context for studying family businesses (Bresciani et al., 2016; Gallucci et al., 2015; Jaskiewicz et al., 2015).\nLiterature review: Family business research has advanced considerably over the past three decades (Litz et al., 2011), providing a rich platform for future research, including the issue of continuity through the family bloodline (Evert et al., 2016). We draw on Chua et al. (1999, p. 25) seminal definition of family business:\nResearch methodology: For this investigation, we engaged a case study approach to generate theory (Eisenhardt and Graebner, 2007; Gioia and Pitre, 1990). Case study data from three family businesses were aggregated into dimensions empirically underpinned by themes and open-coded concepts (Boyatzis, 1998; Miles and Huberman, 1994). Through this process, we strive to understand \"actions, events, processes, objects and actors\", where \"novel and unexpected understandings\" emerge from patterns in empirical material that are linked to a broader theoretical frame (Nordqvist et al., 2009, p. 298).\nFindings: Winegrowing has a high emotional connectedness to the land (terroir) and to the craft of winemaking (Black and Ulin, 2013). Consistent across the three families was the intention for the family winegrowing businesses to continue in the family. Notably, different key characteristics of the knowledge base possessed by each generation were identified as a central feature in the findings. As outlined in Figure 1 and respectively discussed in detail below, the data analysis resulted three aggregated dimensions - knowledge sharing, entrepreneurial characteristics and leadership attributes. Each dimension is underpinned by themes considered a distinctive collection of findings relevant to family winegrowing businesses.\nDiscussion and conclusions: This study contributes to increasing understanding of how family winegrowing businesses are sustained across generations through three dimensions: knowledge sharing, entrepreneurial characteristics and leadership attributes. Interaction between the three dimensions was revealed through similarities and patterns between antecedent themes and categorised data (Figure 1). Entrepreneurship, knowledge and leadership all have common threads that contribute to a firm's competitive advantage (Barney, 1991). Specifically, knowledge can be complementary to the entrepreneurial characteristics through the vision for the business (Cabrera-Suarez et al., 2001; Grant, 1996; Nonaka and von Krogh, 2009). 
Sharing knowledge in a bidirectional manner - from the senior generation to the next generation and vice versa - resonated as a central activity for family winegrowing businesses. In this way, knowledge sharing could represent a strong competitive advantage over non-family firms. If the diverse knowledge between generations is captured and well managed, there is potential for new ventures or strategic renewal activities to take place (Phan et al., 2009).\n",
"role": "user"
},
{
"content": "Purpose: This paper aims to investigate how family winegrowing businesses can be sustained across generations.\nDesign/methodology/approach: The authors engaged a multi-level case study approach. In total, 27 semi-structured interviews were conducted with three winegrowing firms in New Zealand. All family members (both senior and next generation) employed in each business were interviewed alongside non-family employees.\nFindings: Three key dimensions - knowledge sharing, entrepreneurial characteristics and leadership attributes - were identified that can support successful successions in family winegrowing businesses.\nOriginality/value: The authors have generated a theory that enables academicians and practitioners to understand how family winegrowing businesses can be successfully sustained across generations. The authors argue that knowledge is a central feature in family firms where previous research combines knowledge with entrepreneurial orientation or the resources and capabilities of a firm.\n",
"role": "assistant"
}
] |
Job mismatch in early career of graduates under post-communism
|
[
"Returns to higher education",
"Graduate labour force",
"Transition from university to labour market",
"Job mismatch",
"Over-education",
"Post-communist transformation",
"HEGESCO survey"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\nIntroduction: Due to the seminal work by Becker (1975) and Mincer (1974), analysing investments into human capital and the returns to these investments are one of the most frequent research issues in the economic literature. There is a growing interest in the topic among sociologists, too, who originally defined the \"returns\" in terms of occupational prestige, satisfaction with work or chances to avoid unemployment. Empirically, large-scale comparative studies on school-to-work transition can be quoted on the topic (Shavit and Muller, 1998) (Muller and Gangl, 2003; Wolbers, 2003, 2007). Going beyond the sociological approach of credentialism, more recent studies tend to discuss education - job mismatch. This fits well into the agenda of \"returns to education\" and seems to be a challenge for both economists and sociologist. In fact, there is a large variety of terminology appearing in the literature from over-education to underemployment or to similar further \"labels\" like diploma inflation, performance gap or credential gap between education and jobs (Freeman, 1976; Tsang and Levin, 1985; Livingstone, 1998).\nResearch issues and conceptual background in the light of the country context: In order to frame vertical and horizontal mismatch during the early career of graduates in four new EU-member states, the present section is divided into two parts. First, the problem of education - job mismatch is discussed at the individual (micro) level. Second, two mechanisms are described influencing the link between the school system and the labour market at the macro level with special focus on the countries investigated.\nData, measures, analytical design: This paper utilises the survey data from the HEGESCO project (www.hegesco.org). The sampling frame of the data collection (2008/2009) was the diploma holders who completed their studies five years earlier in 2002/2003. The project was a follow-up of the previous REFLEX project and used the same questionnaire. As for the number of observations, it is 1,533 in Hungary, 1,009 in Lithuania, 1,200 in Poland and 2,923 in Slovenia. The file is weighted to 2,000 cases in order to give an equal weight to the data in each country. The weighting coefficient is used also to correct for over- or under-representation of certain levels or fields of higher education compared to population figures[2].\nPredicting vertical and horizontal mismatch: The impact of demographics\nSummary and discussion of the findings: The aim of this paper was to investigate education - job mismatch among young graduates under post-communism. Despite of the massive expansion at the tertiary level of schooling as well as of the economic restructuring under post-communism making this research a highly relevant issue for these countries, relatively few previous studies on the topic could have been quoted. Two dependent variables, vertical and horizontal mismatch were defined, partly in an objective manner, on the grounds of the ISCO job title, and partly in a subjective manner, based on graduates' subjective assessment about how much the job fits to their field of study. The analysis focused on the determinants of mismatch and did not deal with its consequences. This next section intends to interpret the findings from the perspective of policy implications and ends with the limitations of the above analysis.\n",
"role": "user"
},
{
"content": "Purpose: The purpose of this paper is to investigate vertical and horizontal mismatch between education and current occupation for graduates in four post-communist societies: Hungary, Poland, Lithuania and Slovenia. In this way it contributes to the field by exploring how mechanisms, known from previous studies on western societies, affect job mismatch in emerging market economies.\nDesign/methodology/approach: Two dependent variables are constructed: working in a non-graduate occupation as defined by the ISCO job title depicts vertical mismatch; assessment of the job from the perspective of the fields of study describes horizontal mismatch. Since the dependent variables are dichotomous ones, binary logistic regression models are fitted to the data predicting the incidence of mismatch. Explanatory variables cover mechanisms affecting job mismatch: variation by fields of studies, accumulated work experience during studies, labour market uncertainties during early career, trade off between job safety and job mismatch, persistence of \"bad\" labour market entry during early career, influence of parental background on school-to-work transition.\nFindings: The analysis reveals significant differences for study fields in association with occupational specificity of the disciplines. Only study-related work experience seems to be advantageous to find a matching job. Labour market uncertainties increase the probability of job mismatch. Job safety is more important than a matching job.\nOriginality/value: Mismatch in first occupation has strong and long-lasting effect on the job match even five years after the graduation. The effect of parental background on job mismatch is curvilinear.\n",
"role": "assistant"
}
] |
Universal banks and stock-market reaction: Some evidence from major announcements
|
[
"Banks",
"Insurance",
"Divestment",
"Financial institutions",
"Conglomerates",
"Equity management"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\nIntroduction: Over the last decade(s), one of the trends that has really changed the way banks and insurance companies traditionally operate is the bank-insurance interface. Apart from the demutualization of insurance firms, one observes joint ventures where a bank and an insurer establish a third company in which they both have interest. Essentially, they aim to provide banking and insurance products through a common distribution channel and/or the same client base. In France, they call it bancassurance, in Germany Allfinanz. Whatever you call this trend, its appearance has been a significant competitive development in the retail financial services sector. Many banking institutions and insurance firms have found such restructuring to be an attractive and profitable complement to their core businesses. The successes of bancassurance strategies, however, have been far from uniform across financial intermediaries and geographical regions. van den Berghe and Verweire (2001) examine the potential advantages and disadvantages of financial convergence at the consumer, market, and macro level, while providing general recommendations from a managerial as well as a regulatory point of view.The promise of being able to leverage the profitability of large customer bases and extensive distribution networks has attracted the attention of the retail financial services sector and encouraged more and more organizations to diversify outside their traditional competitive domains. There are probably more failures than successes in the bank-insurance interface, but this critically depends on the region examined. Fields et al. (2007) provide evidence showing positive bidder wealth effects, which are related to economies of scale/scope and to the location of the bidder/target. On the other hand, Carow (2001a) hints a word of warning, as insurance values are plummeting on account of the court ruling to allow banks to sell annuities and insurance products[1]. Analogous results are also reported by Hendershott et al. (2002) and Neale and Peterson (2005). Some of the most important elements of the bancassurance's uneven success are regulatory burdens, poor implementation, economic environment, or short-term management strategies. However, one may go beyond these conventional \"impediments\" and include lack of a sales culture within the bank, poor product promotions, substandard manpower management, failure to integrate marketing plans, lack of interest by the branch management, improper database expertise, poor sales channel linkages, inadequate incentives, resistance to change, and even negative attitudes toward insurance.The phenomenon has attracted the attention of academics and practitioners alike. Molyneux et al. (1997) and Genetay and Molyneux (1998) provide an excellent overview of bancassurance in Europe, and document its historical roots dating back to the nineteenth century. A number of researchers have examined the existence of diversification benefits and efficiency gains within financial conglomerates (Boyd et al., 1993; Gande et al., 1997, 1999; Saunders, 1994; Vander Vennet, 2002; Nurullah and Staikouras, 2008). Others have argued that the convergence would enhance the static and dynamic efficiency of financial institutions, without increasing the risks to financial stability (Saunders and Walter, 1994). Recently, Staikouras (2006b) and Artikis et al. 
(2008) identify the key challenges and obstacles that financial conglomerates face in attracting new businesses from less traditional channels, as well as the underlying market dynamics and incentives to bank-insurance ventures.The examination of stock market movements has been central in the finance literature, with event study methodologies being employed for financial conglomerates. Thus, the present work concentrates on the impact that bancassurance announcements as well as insurance sell-offs have on the stock returns of these institutions. To this end the paper identifies a number of major corporate restructuring cases and examines how shares reacted to each pertinent announcement. The analysis is also extended to a cross-sectional framework to identify the overall trend of the equity market around the announcement of these corporate events. The corporate standing of these organizations, the size of their proposed deals, as well as their dominance in international markets would help us draw some conclusions and hopefully trigger more research.The rest of the paper is organized as follows. The next section presents the data and briefly discusses the methodology employed. The \"Empirical results\" section performs all relevant tests and discusses the empirical findings. The last section concludes the paper.\nData and methodology: The current work is based on a sample of international financial intermediaries that were involved in important business restructuring events[2]. These universal institutions have directly or indirectly, via their strategies, changed the landscape of financial intermediation and caught the attention of the financial press. The cases examined are grouped into three broad categories as described in more detail in Appendix 1, Table AI. In particular, the sample comprises bank-insurance ventures, de-conglomeration deals, as well as mergers and/or acquisitions[3] in the broad area of financial institutions. The latter calls for the company taking control to be either a bank or insurer, but necessitates that merging with or acquiring the particular target will not produce a bank-insurance firm. The data sample is based on daily stock prices, which are then transformed into their respective return series.In order to evaluate the impact of the above-mentioned corporate announcements on the bidding companies' equity prices, the paper employs an event study methodology (Brown and Warner, 1985). Similar to Delong (2001), among others, the market model is used to calculate the abnormal returns (AR) upon and around the announcements of the various deals. However, taking into account the well-known interest rate sensitivity of financial intermediaries (Staikouras, 2003, 2006a), we extend the market model to incorporate the unanticipated changes in short-term government securities - where statistically appropriate. The estimation of the model parameters is based on equity returns from day -240 to day -41, relative to the announcement date (day zero). Similar results are obtained when a longer estimation period (-260 to -61) is employed. The estimated coefficients are then used to calculate the expected and excess returns for each stock.To facilitate the evaluation of the market's reaction throughout the event period ($-X \le t \le +X$), cumulative abnormal returns (CAR) are calculated as follows: $CAR_i(-X,+X) = \sum_{t=-X}^{+X} AR_{i,t}$ (1). To test the significance of AR, under the null hypothesis that they are equal to zero, the procedure described in Brown and Warner (1985) is followed.
The test statistic is the ratio of stock i's excess return on day t to its estimated standard deviation ($s_i$), with the latter being the standard deviation of excess returns from the estimation period. This follows a t-distribution and is formulated as: $t_{i,t} = AR_{i,t}/s_i$ (2). The statistical significance of the cumulative excess returns is given by: $t_{CAR_i} = CAR_i(-X,+X)/(s_i \sqrt{d})$ (3), where d stands for the total number of days for which AR are accumulated. In order to investigate the impact of these announcements in a cross-sectional framework, the average AR (AAR) and the cumulative AAR (CAAR) are also calculated by: $AAR_t = \frac{1}{N}\sum_{i=1}^{N} AR_{i,t}$ (4) and $CAAR(-X,+X) = \sum_{t=-X}^{+X} AAR_t$ (5), where N is the number of institutions in the sample. In both cases the t-statistics are calculated in the same way as in equations (2) and (3), respectively, only in the latter case the average values are being used. It is worth clarifying that, in calculating t-values, the denominator is the standard deviation of the AAR, and not the average standard deviation of the companies' AR.\nEmpirical results: The empirical investigation is based on realized market returns achieved by equity-holders of financial intermediaries. The focus is mainly on bank-insurance ventures, some of which have met reasonable success, while others decided to divest their insurance branch(es). In essence, the analysis aims to explore the response of stock market participants to the formation of these financial conglomerates and/or sell-offs of their insurance business. Excess returns are estimated over different time periods surrounding the particular announcement date (day zero). An interesting survey by Carow and Kane (2002) reports event-study evidence on the relaxation of the long-standing restrictions on US financial intermediaries.The different event windows aim to examine the stock market reaction from three angles. First, the analysis concentrates on whether possible information leakage has resulted in AR prior to the announcement date and thus various intervals until day zero are explored. Second, based on the aforesaid event intervals, symmetric cumulative AR (same number of days before and after day zero) are also examined for comparison purposes. Third, intervals that include a period prior to the particular event as well as one trading day after day zero are also considered. The last two time intervals aim to explore the speed of price adjustment to these corporate announcements, and provide an insight into the efficiency of the market. The first set of empirical findings for the bank-insurance deals is reported in Table I.These six cases of corporate restructuring have expanded at a cross-industry level. They involve global financial institutions of considerable size, with the last two deals focusing on new markets such as Greece. The merger between Citicorp and Travelers Group is the one that stands out due to its high excess returns across a lengthy time frame. With that move, Citigroup not only affected its own future, but the future of peer institutions as well as the Federal policy per se. Market participants realized considerable positive excess returns on their stocks, even 20 days prior to the announcement and until five days after the event. This mirrors the long-expected demolition of historical walls separating the banking, insurance, and brokerage industries.
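Before turning to the individual cases, a minimal sketch may help make equations (1)-(3) concrete; the file name, column names and the (-20, +5) window are illustrative assumptions rather than the paper's actual data layout.
```python
# Hedged sketch of the market-model event study described in the methodology
# section; data layout and the (-20, +5) window are illustrative assumptions.
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Hypothetical file: "stock" and "market" daily returns, indexed by day
# relative to the announcement (day zero).
rets = pd.read_csv("daily_returns.csv", index_col="day")

# Market-model parameters estimated over days -240..-41.
est = rets.loc[-240:-41]
ols = sm.OLS(est["stock"], sm.add_constant(est["market"])).fit()
s_i = ols.resid.std(ddof=2)  # estimation-period std of excess returns

# Abnormal and cumulative abnormal returns over the event window, equation (1).
win = rets.loc[-20:5]
ar = win["stock"] - (ols.params["const"] + ols.params["market"] * win["market"])
car = ar.cumsum()

t_ar = ar / s_i                                            # equation (2)
t_car = car / (s_i * np.sqrt(np.arange(1, len(car) + 1)))  # equation (3), d = days accumulated
```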
Before the congressional passage of the Gramm-Leach-Bliley Act (1999), which repealed the Glass-Steagall Act (1933) and the Bank Holding Company Act (1956), considerable boundaries existed to the affiliation of banks with other financial institutions[4].Although the empirical findings are from the bidder's perspective, in the Greek market we focus on target firms because:\n* Bank-insurance is expanding rapidly in this particular market - now under a de jure implementation.\n* The bank-insurance phenomenon existed for many years in a de facto mode (Kalotychou and Staikouras, 2007).\n* The big international bidders' equity returns are not significantly affected in this particular case.\nThis methodological diversion does not alter the overall empirical findings. Since the early 1990s, the market has developed considerably and many multinational banks and insurance firms have established their subsidiaries there (Artikis et al., 2008). Emporiki and Piraeus are two of the major banks in Greece. In both cases, no major responses are reported, apart from some significant AR just around the event period. A plausible explanation is that the market was used to such an interface, at least in a de facto mode, and the deals came as a natural consequence of the financial services modernization process.Credit Suisse had two corporate announcements, but in both cases the market reaction was considerably more modest compared to that of Citigroup. However, it is evident that around the strategic alliance announcement period (-2,+4) the stocks experienced positive AR. A close look at the above figures unveils that the excess returns were reaped just after day zero, implying that the market could not have been aware of the forthcoming deal. This can be seen by comparing the insignificant excess returns until day zero (-X, 0) with the significant returns obtained from the symmetrical windows (-X,+X) or the ones considering an extra day after the announcement (-X,+1). Interestingly, no AR were obtained on the announcement day. When Credit Suisse merged with Winterthur Insurance, the results were less impressive in terms of statistical significance. The only positive returns appeared three days prior to the event. Significant negative AR were realized on day zero.Allianz and Dresdner realized significant positive returns only a couple of days before the event date. The purchase of Dresdner emerged out of a confused period in which German bankers, busy reorganizing a relaxed corporate network that had outlived its usefulness, often settled for second-choice solutions. The latter comes in contrast to the European norm found in Italy, France, and Benelux countries, where banks sell a lion's share of insurance. Nowadays, the universal banking model is quite common and accepted in Germany. Similar results are reported for the Bank One and Zurich acquisition. By buying the US life and annuity business from Zurich Financial Group, Bank One became both a manufacturer and seller of its own branded term life, fixed and variable annuities, 403(b) plans[5] and other insurance products. The purchase doubled its insurance assets under management to $11 billion.
The moderate stock market reaction could be attributed to the fact that it was unusual, at that time, for a US bank to become an insurance underwriter, with most banks choosing to sell an insurer's products.Sell-offs in the bank-insurance interface\nConcluding discussion: Banks and insurers across Europe and worldwide have rushed into mergers with one another, hoping to reap economies of scale and boost revenues by cross-selling. However, the phenomenon has not been welcomed in all geographic regions, and cross-selling has turned out to be more difficult than expected. The current study aims to examine some popular corporate restructurings in the financial services industry, and draw an overall conclusion based on the fact that these cases have attracted considerable public interest.Three groups of corporate deals are examined, namely bank-insurance announcements, insurance divestments, and mergers/acquisitions within the financial services industry in general. The econometric investigation shows that financial markets approach bank-insurance ventures in a more favorable manner. Although the latter is not always characterized by statistically significant excess returns, their equity performance, in all of the cases examined, is in line with the overall economic climate over the period of investigation. Sell-offs of insurance branches are mainly disapproved of by investors, but at the same time the divestment act itself falls within the banking \"short-term high-growth\" attitude. In a similar vein, horizontal restructuring among financial services firms - excluding insurers - shows even more negative equity performances.From the insurers' point of view, bancassurance offers the prospect of reduced distribution costs compared to the traditional agent channel. Insurers are also strategically aiming to seize the opportunity of reaching new customers not easily accessed by their agent network. At the same time, bankers see their profit margins being squeezed, and additional fee-income sources emerge as a solution to stabilize profitability and increase the range of their financial services. It is therefore theoretically justified for the bank-insurance trend to emerge, without ignoring the pertinent obstacles and opportunities (Staikouras, 2006b) that can threaten/sustain its existence. It is almost certain that changes in global markets will force customers, new and existing, into becoming more insurance conscious and understanding the risks and benefits of bancassurance. The \"revolutionary\" shift in the insurance industry is caused by many complementary structural changes taking place both at an institutional and global market level (van den Berghe, 1998).When it comes to the choice of insurance services, life products feature more prominently in such hybrid corporate structures. The basis behind this is the complementary nature of life insurance and bank products, both of which are geared towards asset accumulation and management by financial institutions. For this reason, it is easier for the banks to sell life products than non-life services. However, research should be geared towards non-life insurance provisions into which bancassurers can successfully expand, such as residential fire policies tied up with mortgage loans, and/or motor policies associated with car loans. Moreover, the motives behind managers' decisions are worth exploring especially when they consciously decided to invest and subsequently sell a particular business.
A few other fertile research fields are asset management, pensions/saving provisions, the non-existent regulatory frame, and the capital that bankers/insurers should have. Thus, it would be worth looking at these issues as a number of intuitive thoughts can be put forward, which could enhance our understanding and judgment about the bancassurance phenomenon.\n",
"role": "user"
},
{
"content": "Purpose: - The purpose of this paper is to investigate the interface between the banking and insurance sectors. Using capital market data, the paper aims to discover any significant equity returns around the announcement date of these bank-insurance interfaces.\nDesign/methodology/approach: - The analysis employs an event study methodology to evaluate the equity performance of these institutions. The empirical findings are based on well-known financial intermediaries taken from an international sample.\nFindings: - The magnitude and sign of equity returns appear to differ among the cases examined. Some firms exhibit considerable abnormal returns, while others remain passive to any corporate restructuring revelation, or even undrape stock market losses. In many cases, the latter is associated with the overall economic environment, and/or with investments that are not compatible with the general banking philosophy of \"fast growth within short-term horizons.\" Based on equity returns, the bank-insurance interface seems to be the most preferable business restructuring; while insurance divestments and horizontal mergers, among financial intermediaries, do not perform as profitably as expected.\nOriginality/value: - The paper will be of value to those interested in capital markets with emphasis on universal banking and insurance. It is suitable for academics as well as practitioners.\n",
"role": "assistant"
}
] |
Challenges to economic integration and social inclusion of Syrian refugees in Turkey
|
[
"Social inclusion",
"Economic integration",
"Refugees",
"Host communities",
"Forced displacement"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\nIntroduction: More than 11m Syrians have been forced from their homes, including more than 6m internally displaced persons and about five million refugees since the start of the civil war in 2011 (United Nations High Commissioner for Refugees, 2018). With more than 3.3m Syrian refugees registered in Turkey, the majority of who are women and children, the country is now hosting the largest group of refugees in the world (UNHCR, 2019). Unlike most signatories to the 1951 Geneva Convention Turkey only grants refugee status to citizens from countries within the Council of Europe. Therefore, lacking an appropriate legal framework for the economic and social inclusion of Syrian refugees, the Turkish government enacted a Law on Foreigners and International Protection and a new \"temporary regulation\" in 2014 (Baban et al., 2016; Battjes, 2006; Disaster and Emergency Management Authority, 2014a). Refugees with \"temporary protection\" status have access to the public education system, healthcare services and other forms of social assistance in Turkey. As of 2016, persons covered by \"temporary protection\" status are eligible for work permits secured by employers. Despite a shift in government policy toward Syrian refugees, and without an overarching rights-based approach that includes the participation of all stakeholders, enhancing the livelihood security of refugees and vulnerable members of host communities remains bleak.\nContext: According to the Disaster and Emergency Management Authority less than 10 percent of Syrian refugees live in one of the 21 state-run camps located mostly in the Southeast of Turkey contiguous to the border with Syria. The majority of the Syrian refugees in this region live in large urban or semi-urban areas, many of which have high levels of unemployment. For instance, in Mardin-Batman-Sirnak-Siirt provinces the unemployment rate hovers around 30 and 28 percent in Sanliurfa-Diyarbakir significantly higher than the national unemployment rate of 10 percent (Gursel and Koksa, 2017; Organization for Economic Cooperation and Development). The participation of Syrian refugees in the informal labor market has put downward pressure on wages and working conditions and displaced low-skilled workers, especially women and youth. At the same time, Syrian refugees including Syrian-owned businesses have invigorated communities which previously experienced economic decline and have contributed to growth in the country (International Crisis Group, 2018).\nLabor market in Turkey: The informal economy and labor market in Turkey were already dominant, prior to the humanitarian crisis, especially in low value-added sectors such as agriculture, construction and manufacturing (Del Carpio and Wagner, 2015). In fact, the informal economy represents approximately 29 percent of the gross domestic product and 35 percent of the labor market is informal or undocumented by the government (Ceritoglu et al., 2017; Icduygu, 2016). Skills mismatch with market needs and labor market rigidity exaggerates the informal labor market (Gursel and Koksa, 2017). 
Moreover, some reports indicate that the presence of a large number of low-skilled Syrian refugees has further impacted the Turkish labor market, with a decrease in wages, the expansion of exploitative working environments, an increase in child labor and the displacement of low-skilled Turkish and Kurdish workers, particularly women and youth from host communities (Yalcin, 2016; Heyse, 2016). Without the availability of household surveys targeted at the Syrian refugee diaspora in Turkey, it is difficult to determine the skill sets, education level, employment history and precise contribution of refugees to the national economy. Furthermore, in the absence of sound labor market studies that determine the skills anticipated to be required for the growth of the Turkish economy in the next few decades, attempts to weave Syrian refugees into the socioeconomic fabric of the country are likely to meet with only limited success.\nWork permit regime: In a move to integrate Syrian refugees into the formal labor market, the Turkish government reformed labor legislation in 2016 to allow holders of the \"temporary protection\" status to apply for work permits six months after their date of registration. Having a work permit brings with it considerable benefits. For example, work permit holders are eligible for private health insurance, a pension and workers' compensation, and are able to sue for non-payment of wages. Despite the reform of labor legislation, significant barriers to the formal labor market remain. Employers who want to employ an individual with \"temporary protection\" status must prove that they are unable to hire a Turkish citizen who can perform the same job, and must adhere to the employment quota unless they are awarded an exemption: no more than 10 percent of the total employees in a workplace may hold \"temporary protection\" status (Kutlu, 2015). Employers looking to employ an individual with \"temporary protection\" status must submit and pay for all work permit application fees and are thereafter required to pay at least the national minimum wage, health insurance and social security contributions, and to file tax reports for the respective individual. Employers submit work permit applications through the online e-government services portal (www.turkiye.gov.tr).\nAccess to the formal labor market: Proficiency in the Turkish language is not required in many sectors, but it expands employment opportunities in the formal labor market and empowers workers to understand and to demand their rights. The ability to comprehend health and safety measures is particularly important for workers in the agriculture, construction and manufacturing sectors. Complicating access to language education, as the space for civil society diminished, the government in 2017 prohibited nongovernmental organizations from offering Turkish-language lessons unless they either have approval from the government or employ government-certified trainers. Municipalities with large Syrian communities offer Turkish lessons; however, classes are often held during the day, which is problematic for those working long hours or daytime jobs.\nThe role of the private sector: Local chambers of commerce and industry in southeastern Turkey have performed exceptionally well, devising education and employment generation strategies with sister Syrian private sector associations and other organizations, some of which have received financial support from the European Union Regional Trust Fund.
However, aside from the Union of Chambers and Commodity Exchanges of Turkey (TOBB), the national private sector has largely been absent from the discussion on the economic integration and social inclusion of Syrian refugees (Karasapan, 2016). In many respects, the Turkish Industry and Business Association (TUSIAD) and the Turkish Enterprise and Business Confederation (TURKONFED) drive the agenda on inclusive business practices, good governance, corporate social responsibility and the role of the private sector in support of democracy in Turkey. TUSIAD and TURKONFED could complement efforts by the International Labor Organization, International Organization for Migration, United Nations Development Programme, the Turkish employment agency (ISKUR), TOBB, Syrian Economic Forum, Syrian Business Association, Spark, Building Markets, Habitat and Idema to improve the employability and expand the career development of Syrian refugees and members of host communities. In a study of Syrian-owned small and medium-sized companies (SMEs) in Turkey, Building Markets highlights the positive impact the Syrian private sector has had on the local and regional economies where significant numbers of Syrian refugees reside. According to the study, entitled \"Another Side to the Story,\" more than 6,000 Syrian-owned companies have registered in Turkey since 2011, employing nine people on average. Collectively, Syrian-owned SMEs have invested over three hundred million dollars in the country (Building Markets, 2016).\nConclusion: Unlike in other frontline states to the war in Syria, there is a relatively straightforward avenue to formal employment for holders of the \"temporary protection\" status in Turkey. However, without addressing the barriers to economic integration and social inclusion of refugees, the Syrian diaspora will likely remain in a state of protracted vulnerability. The prevalence of the informal labor market, the extant hurdles to the formal labor market, the necessity for many refugees to adopt negative coping mechanisms, such as accepting unfair wages or sending children to work instead of school in order to supplement family income, and discrimination toward refugees are all serious socioeconomic issues that require increased cooperation and better communication among local, national and international stakeholders. The pursuit of a rights-based approach, one that considers local dynamics and political sensitivities, would also go some way toward greater acceptance of refugees by members of host communities.\n",
"role": "user"
},
{
"content": "Purpose: The following question drove this research: Would the pursuit of a rights-based approach, one that considers local dynamics and political sensitivities result in greater economic integration and social inclusion of Syrian refugees in Turkey? The paper aims to discuss this issue.\nDesign/methodology/approach: This piece draws on independent research the author conducted in Turkey and other frontline states to the war in Syria from 2016 to 2018.\nFindings: Despite a shift in government policy toward Syrian refugees, without an overarching rights-based approach that includes the participation of all stakeholders and considers local dynamics and political sensitivities, enhancing the livelihood security of Syrian refugees and vulnerable members of host communities remains bleak in Turkey.\nOriginality/value: This original paper closely examines the Government of Turkey's response to the humanitarian crisis that was precipitated by the armed conflict in Syria. The paper also examines the socioeconomic dynamics and increased tensions between the Syrian refugee and host communities.\n",
"role": "assistant"
}
] |
Reflecting on the role of universities in the fight against corruption
|
[
"Universities",
"Corruption",
"Extortion",
"Bribery",
"United Nations Principles for Responsible Management Education (PRME)"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\n__NO_TITLE__: Corruption is a complex, multilayered phenomenon. Thus, it is hardly surprising that no universal consensus has yet been reached on how to define it (Gordon and Miyake, 2001). Critical objections to a globally uniform definition of the term indicate that corruption is a phenomenon that differs in appearance, extent and moral acceptance from culture to culture. Exponents of this view emphasize that the current understanding of corruption is strongly influenced by Western culture and worldview (Velasquez, 2004; Nuijten and Anders, 2007). To counter this objection, efforts are being made to achieve a more balanced understanding of this global phenomenon (UNDP, 2008). Transparency International (2008) defines corruption generally as the abuse of entrusted power for undue private gain. In this context, actions regarded as corrupt are linked to the view on what power is, where power comes from, and what should be done with it (Collins et al., 2009). Since power is typically associated with the governing of a state, corruption is primarily regarded as a concept related to public policy. In this context, two levels of corrupt behavior are often distinguished: grand corruption, otherwise known as political corruption and petty corruption. Grand corruption is associated with illicit behavior involving the application of undue political influence at high levels of government. This form of corruption typically involves significant monetary and/or nonmonetary values and is associated with far-reaching consequences for substantial parts of or the entire society. Usually, these grand monetary and/or nonmonetary advantages are offered to manipulate public policies and institutions as well as political processes (Pacini et al., 2002). Conversely, petty corruption relates to illicit behavior in everyday life, often lying at the interface between lower- to middle-level government bodies and private actors (UNDP, 2008; Elliott, 1997). Monetary and/or nonmonetary values are comparatively low and primarily affect the individual parties involved. However, critics of this distinction emphasize that in practice, there is no clear distinction between the two levels. Rather, the transitions are fluid, and petty corruption also has considerable effects on society, as it prevents effective and efficient administrative action and undermines the legitimacy of public institutions.\nResearch: Following Principle 4 of the PRME, scholars at HTW Chur engage in empirical and conceptual research to advance the understanding of corruption and foster the possibilities to fight it. To accomplish this goal, multiple yet interconnected interdisciplinary research projects have been conducted at HTW Chur.\nPartnership: Following Principle 5 of the PRME, HTW Chur engages in partnerships with actors from the public and private sectors and academia to strengthen and unite in the fight against corruption. As a university of applied sciences, HTW Chur actively searches for partners that can help the university not only work on issues from a theoretical perspective but also develop scientifically sound practical solutions. Partnerships are essential because they help to staunchly focus academic work on the needs of business practices in both the public and private sectors. 
These partnerships are seen as one of the core strengths and an integral part of HTW Chur's organizational DNA.\nDialogue: Following Principle 6 of the PRME, dialogue has been a vital instrument in achieving what HTW Chur has thus far accomplished in the field of anticorruption. The university embraces the chance to integrate its scholars' knowledge with that of others and to learn from collective experiences. One way that HTW Chur accomplishes this goal is through organizing and participating in events with institutions in the public and private sectors, including the Swiss State Secretariat for Economic Affairs (SECO), chambers of commerce, trade associations, and the UN Global Compact local networks. During these dialogue events, scholars have the opportunity to develop knowledge by giving and listening to talks in a reciprocal manner. In past events, scholars from HTW Chur presented the results of their various research projects, enabling their audience to gain an understanding of and appreciation for the contributions of HTW Chur to anticorruption efforts. Likewise, the academics learn from others' experiences by listening to talks by other institutions. In this mutual learning process, all parties benefit from the shared experience and information.\n",
"role": "user"
},
{
"content": "Purpose: Corruption continues to ravage societies around the world. The fight against corruption can be fruitful only if approached from multiple standpoints. Thus, corruption must also be approached from an academic and educational perspective. The purpose of this paper is to provide a good practice example of how universities and business schools can take actions to align themselves with the international sustainability and anticorruption agenda.\nDesign/methodology/approach: The six principles of the United Nations Principles for Responsible Management Education (PRME) provide a framework for higher education institutions to address corruption-related issues. This paper presents the case story of the Swiss-based University of Applied Sciences HTW Chur, which developed an academic working agenda on corruption-related topics based on the principles of the PRME.\nFindings: The case story shares the actions that HTW Chur has taken and the benefits that have resulted from the university's work. The findings show that to address corruption-related issues, scholars from the university took actions related to four principles in the PRME: method, research, partnership and dialogue. Furthermore, the results indicate that in addition to the university itself, public and private institutions have also profited from the actions taken.\nResearch limitations/implications: This paper is founded on a single case story; thus, the usual limitations of this research design apply.\nPractical implications: It becomes apparent that the needs of the private sector in the fight against corruption could be addressed by engaging in and strengthening partnerships with universities. Thus, it seems beneficial to develop guidelines and standards to facilitate collaborations and dialogue in a participatory and transparent way.\nOriginality/value: The paper provides a good practice example of how universities can take actions to align themselves with the international sustainability and anticorruption agenda.\n",
"role": "assistant"
}
] |
Are Ghanaian MFIs' performance associated with corporate governance?
|
[
"Corporate governance",
"Performance management",
"Financial performance",
"Ghana",
"Banks"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\n1. Introduction: This paper attempts to explain the lackluster performance of rural and community banks (RCBs) in Ghana from the point of view of their state of corporate governance. RCBs belong to the broad class of financial institutions referred to as microfinance institutions (MFIs). They are special purpose financial institutions established to facilitate access to institutional credit and banking services by people living in rural areas. This, it is expected, will help improve the socio-economic well-being of the majority of Ghanaians, who live in rural areas and are mostly engaged in farming, fishing, trading and cottage industries with little access to commercial banks.The concept of RCB was operationalized in Ghana in 1976. RCBs in Ghana are community-owned and community-run unit banks. That is, they operate as banks but are not allowed to operate fully fledged branches in other communities. The key functions of RCBs are (Bank of Ghana, 2006) to:* Mobilize savings in the rural communities and channel them into the provision of credit to rural dwellers.* Monetize the rural communities by way of inculcating in rural dwellers the banking habit.* Serve as instruments for the development of business in the rural communities and thereby facilitate rapid growth of the national economy.A review of deposit mobilization, lending activities and total assets of RCBs and deposit money banks (DMBs), which include commercial banks, merchant banks, development banks and universal banks, is shown in Table I. The table indicates that in recent times, 2000-2006, the proportion of total deposits mobilized by RCBs averaged only 7 percent; 4 percent for loans; and 6 percent for total assets. Table I further suggests that RCBs' proportion is not growing. This is in spite of the fact that the number of RCBs currently far exceeds the number of DMBs, (122 versus 23). Meanwhile, RCBs are supposed to be serving as much as 70 percent of the population.Obviously, mobilizing funds and granting loans at such low levels will not do much towards improving the socio-economic situation of the rural folk. Bashin and Akpalu (2001) have expressed concern about the low level of provision of credit facilities to micro-enterprises by MFIs in Ghana. They concluded that \"MFIs have much more to do to raise their scope.\"Further, a number of studies have suggested that there exists a large unsatisfied credit demand in Ghana, especially among locally owned firms. These include Thomi and Yankson (1985), who investigated in-depth 933 enterprises located in 36 small and medium scale towns; Steel and Webster (1992) who surveyed 31 large and 83 small-scale enterprises; and Aryeetey et al. (1994) also surveyed another group of small and medium scale enterprises.Yet another issue of concern is the fact that a number of RCBs have failed overtime. Table II shows the number of RCBs that have been licensed over time, grouped in five-year intervals, and their cumulative totals. Clearly, the total has grown with time. 
Attention is, however, drawn to the fact that in spite of the steady increase in the cumulative totals, 23 RCBs have been closed down for not doing well, reducing the number in business as at June 2007 to 122 out of a total of 145 that had been licensed. Reporting on the causes of the failure of RCBs in Ghana, an internal central bank report, Bank of Ghana (2001), identified the causes of failure as management incompetence; fraud and embezzlement; negligent or ineffective boards of directors; improper accounting records; non-compliance with regulations in granting insiders credit; persistent operational losses; poor loan recovery and corruption; poor deposit mobilization; use of unqualified staff; non-submission of prudential returns; and no earning assets and non-performing credit portfolios. It is interesting that this list of causes of failure does not include the capital-asset ratio, which stands at 10 percent for all banks including RCBs, higher than the Basel Capital Accord recommends. Rather, it would appear failure is due to the non-functioning of systems and structures that were put in place to govern the running of the RCBs. The broader literature puts the primary cause of failure of rural credit agencies during the 1960s and 1970s as \"lack of institutional viability\" (see for example, Gonzalez-Vega, 1994). Two concepts arise from this conclusion - institutional sustainability (being in a position to successfully provide financial services to the poor on an ongoing basis) and financial self-sufficiency (having the funds to do so). The second is a necessary condition for the first. This study seeks to analyze rigorously the performance of Ghanaian RCBs and to determine whether the systems and structures which govern their operations promote good performance. In particular, we are interested in establishing whether there is an association between the financial performance of Ghanaian RCBs, on the one hand, and their state of corporate governance, extent of outreach to target communities, dependence on subsidies and use of modern technology, on the other. Indeed, the Organisation for Economic Co-operation and Development (2004a) has concluded that best practice economic techniques point to corporate governance as an important determinant of firm performance. We expect that our findings will help us advocate for reforms that may result in RCBs playing a more active role in the lives of the people and companies they serve and thus becoming even more relevant to the economic development of Ghana. Following this introduction, we review pertinent literature, discuss our methodology, present our results, and then make recommendations to help improve the performance of RCBs.\n2. Conceptual framework: We start by discussing the conceptual framework underpinning our investigation. 2.1 Corporate governance\n3. Methodology: Our approach makes minimal assumptions about the distribution of variables.
The essential assumption is an application of the central limit theorem - that the distribution of the variables computed for the 30 randomly chosen RCBs approaches the normal distribution. We proceed as follows: * compute dimensions of financial performance for a sample of 30 RCBs; * categorize the RCBs into four groups based on their standing according to an overall indicator of financial performance; * classify the RCBs into four categories based on their standing according to an overall measure of the state of corporate governance, outreach, use of subsidies, and use of ICT; and then * determine whether RCBs' categorization based on corporate governance plus is a predictor of their categorization when financial performance is assessed. This is a chi-squared test of independence between the two groupings (see the illustrative sketch below). 3.1 Dimensions of financial performance\n4. Results: 4.1 Financial performance\n5. Conclusion and recommendations: We conclude that, given the prevailing environment, there is no association between the state of corporate governance, outreach, use of subsidies and use of information and communication technology among Ghanaian rural and community banks, on the one hand, and their financial performance, on the other. We recommend that the laws and codes of conduct that have recently been designed to guide the conduct of business, which are in the right spirit, be allowed to work. The restriction of individual ownership of RCBs to 30 percent should be relaxed and other means of regulating RCBs should be sought. Finally, owners of RCBs should note the critical importance of developing the competencies of their boards and management.\n",
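To make the final step of the four-step procedure above concrete, here is a minimal sketch of a chi-squared test of independence between two four-way categorizations of 30 banks. The contingency counts are invented placeholders, not the paper's data.

```python
# Hypothetical illustration of the chi-squared test of independence between
# two 4-way categorizations of 30 RCBs; all counts below are invented.
import numpy as np
from scipy.stats import chi2_contingency

# Rows: financial performance category (P1-P4);
# columns: "corporate governance plus" category (G1-G4).
observed = np.array([
    [3, 2, 1, 1],
    [2, 3, 2, 1],
    [1, 2, 3, 2],
    [1, 1, 2, 3],
])

chi2, p_value, dof, expected = chi2_contingency(observed)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p_value:.3f}")
# A high p-value means independence cannot be rejected, i.e. no detected
# association between the governance grouping and the performance grouping.
# Note: with only 30 observations in a 4x4 table, expected cell counts are
# small, so the chi-squared approximation should be interpreted cautiously.
```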
"role": "user"
},
{
"content": "Purpose: - The purpose of this paper is to determine whether in Ghana, corporate governance, outreach to clients, reduced dependence on subsidies and use of modern technology (together called corporate governance plus) are associated with the performance of rural and community banks (RCBs), which are microfinance institutions (MFI), in the context of newly adopted codes of conduct and regulations, ownership rules and quality of management.\nDesign/methodology/approach: - A total of 30 randomly sampled RCBs were categorized into four groups based on analysis of several dimensions of financial performance. Next, RCBs were again categorized into four groups based on their corporate governance plus. A chi-squared test of independence between the two groupings was performed.\nFindings: - The authors found no association between RCBs' categories based on corporate governance plus and their categories based on financial performance.\nPractical implication: - To enhance performance, corporate governance plus must impact financial performance as documented by OECD. Laws and codes of conduct recently designed to guide the conduct of business should be allowed to work. The restriction on individual ownership of RCBs to 30 percent should be relaxed. And RCBs should pay attention to developing the competencies of their boards and senior management.\nOriginality/value: - This is the first formal test of the association between state of corporate governance plus and financial performance of microfinance institutions in Ghana.\n",
"role": "assistant"
}
] |
Economic performance and efficiency determinants of crop-producing farms in Norway
|
[
"Benchmarking",
"Cost function",
"Farm management",
"Farm performance"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\n1. Introduction: The United Nation has predicted that current world population of 7.6bn is expected to reach 8.6bn in 2030, 9.8bn in 2050 and 11.2bn in 2100 (United Nations, 2017). Finding a way to meet the growing demand for food represents a major challenge for farmers, policymakers and agricultural researchers. These global challenges also affect Norway. Increasing the quantity and quality of food in response to growing food demand requires improvement in the performance of the agricultural sector. Agricultural production growth in Norway is a topic of continuing interest to researchers and policymakers who aim to improve the economic efficiency and economic sustainability of primary agriculture. Good farm management and agronomic practices, in combination with efficient input use, are the best ways to improve farm productivity. In this context, agronomic practices are steps farms incorporate into their farm management to produce crop output. These steps include land preparation, time of sowing, crop rotation, use of new crop varieties, pest control, etc. While some practices may be designed to increase output, others may be directed to reducing labor time or costs.\n2. The nature of agricultural production in Norway: Norwegian farms are usually small and family-operated. Only 3.3 percent of the total Norwegian land area is farmland (Statistics Norway, 2013). Owing to the topography of the country, fields are often small, scattered and difficult to cultivate, which contributes to the high costs of agricultural production. With a relatively long winter in most parts of the country and a short growing season (five months on average), growing fodder, mainly grass, has the comparative advantage in most parts of the country. On the other hand, long summer days, with sufficient rainfall, are beneficial for crop production. Moreover, the cool climate limits the spread of pests and diseases (Steinshamn et al., 2016).\n3. Approaches to measuring farm performance: There are two main benchmarking methods in the literature to measuring the performance of farms: a parametric (econometric) approach, such as a stochastic frontier analysis (SFA); and a non-parametric approach, such as data envelopment analysis (DEA). In both methods, the basis for performance measurement is the radial contraction/expansion connecting inefficient observed points representing individual firms with reference points on the efficient frontier (There are other approaches too, for instance, Bayesian stochastic frontier (SF) (Koop and Steel, 2001), semi-parametric (Simar and Wilson, 2007) and stochastic DEA (Huang and Li, 2001), but these are not commonly used in empirical studies).\n4. Empirical model specification: Consistent with the farm efficiency literature (e.g. Christensen and Greene, 1976), we estimated a transcendental logarithmic (translog) cost function incorporating Hicks-neutral technology change. We used panel data but, to simplify the notation, we have dropped the subscripts i and t, where i would denote firm i = 1, ..., n and t would denote time, t = 1, ..., t. Our specification of the cost function C, including the prices of inputs (wj)j=1, ..., J, outputs (yk) k =1, ..., K, and a local wage distribution index (rm) m=1, ..., M, is:\n5. Data: Our data source is the Norwegian Farm Accountancy Survey collected by the Norwegian Institute of Bioeconomy Research (NIBIO). 
The survey participants are selected from a list of farmers randomly drawn from the register of grants kept by the Norwegian Agricultural Agency. The data include production and economic data collected annually by NIBIO from about 1,000 farms in all regions of Norway. The number of participants varies from year to year. Approximately 10 percent of the surveyed farms are replaced each year to incorporate changes in the population of farms in Norway. Participation in the survey is voluntary. There is no limit on the number of years a farm is included in the study. Some of the farmers have participated for more than 20 years, while others have participated for the first time. To accommodate panel features in estimation, we included only those farms for which at least three consecutive years of data are available.\n6. Estimation results and discussion: 6.1 Estimation procedures and hypothesis tests\n7. Conclusion and implication of the study: The aim of this study was to investigate whether farm management and socioeconomic factors contributed to improving the performance of crop farms in Norway. We found that farm resources were widely used sub-optimally, i.e., compared to the best performing farms, some farms produced less output from the inputs they used or used more inputs to produce the same output.\n",
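The paper's exact translog specification is not reproduced in this excerpt. Purely for orientation, a generic translog cost function in input prices w_j and outputs y_k with a Hicks-neutral time trend t, omitting the paper's wage-distribution index r_m and any other paper-specific terms, typically takes the form:

```latex
\ln C = \alpha_0
      + \sum_{k=1}^{K} \beta_k \ln y_k
      + \sum_{j=1}^{J} \gamma_j \ln w_j
      + \tfrac{1}{2} \sum_{k=1}^{K} \sum_{l=1}^{K} \beta_{kl} \ln y_k \ln y_l
      + \tfrac{1}{2} \sum_{j=1}^{J} \sum_{h=1}^{J} \gamma_{jh} \ln w_j \ln w_h
      + \sum_{j=1}^{J} \sum_{k=1}^{K} \delta_{jk} \ln w_j \ln y_k
      + \theta t
```

In standard estimations of this form, symmetry (beta_kl = beta_lk, gamma_jh = gamma_hj) and linear homogeneity in input prices (sum_j gamma_j = 1, sum_j gamma_jh = 0, sum_j delta_jk = 0) are imposed, and Hicks-neutral technical change enters only through the trend term theta*t.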
"role": "user"
},
{
"content": "Purpose: The purpose of this paper is to explore the economic performance of Norwegian crop farms using a stochastic frontier analysis.\nDesign/methodology/approach: The analysis was based on a translog cost function and unbalanced farm-level panel data for 1991-2013 from 455 Norwegian farms specialized in crop production in eastern and central regions of Norway.\nFindings: The results of the analysis show that the mean efficiency was about 78-81 percent. Farm management practices and socioeconomic factors were shown to significantly affect the economic performance of Norwegian crop farms.\nResearch limitations/implications: Farmers are getting different types of support from the government and the study does not account for the different effects of different kinds of subsidy on cost efficiency. Different subsidies might have different effects on farm performance. To get more informative and useful results, it would be necessary to repeat the analysis with less aggregated data on subsidy payments.\nPractical implications: One implication for farmers (and their advisers) is that many of them are less efficient than the estimated benchmark (best performing farms). Thus, those lagging behind the best performing farms need to look at the way they are operating and to seek out ways to save costs or increase crop production. Perhaps there are things for lagging farmers to learn from their more productive farming neighbors. For instance, those farmers not practicing crop rotation might be well advised to try that practice.\nSocial implications: For both taxpayers and consumers, one implication is that the contributions they pay that go to subsidize farmers appear to bring some benefits in terms of more efficient production that, in turn, increase the supply of some foods so possibly making food prices more affordable.\nOriginality/value: Unlike previous performance studies in the literature, the authors estimated farm-level economic performance accounting for the contribution of both an important farm management practice and selected socioeconomic factors. Good farm management practices, captured through crop rotation, land tenure, government support and off-farm activities were found to have made a positive and statistically significant contribution to reducing the cost of production on crop-producing farms in the Central and Eastern regions of Norway.\n",
"role": "assistant"
}
] |
Innovation systems dynamics and the positioning of Europe. A review and critique of recent Foresight studies
|
[
"Europe",
"Foresight",
"Innovation systems",
"Governance of science",
"Global public goods"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\nIntroduction: The purpose of this paper is to shed some light on innovation systems dynamics and the positioning of Europe in a longer term perspective, this being done through a secondary analysis and interpretation of Foresight studies. This way of addressing the issue is relevant on two grounds:\n1. Reviewing and interpreting Foresight studies: the argument: We need first to define foresight studies to provide the perimeter of selection and the argument of interpretation.\n2. Reviewing and interpreting Foresight studies: the methodology: 2.1 Issues addressed, actors and sponsors: a typology of Foresight activities\n3. A review and interpretation of selected Foresight studies along seven dimensions: 3.1 The innovation race: competitiveness and influence through innovation\n4. Discussion and conclusion: a critique of Foresight practices: Our argument is that a review of Foresight activities can be interpreted in terms of the vision of the world held by the actors - here the leading European actors, which are the sponsors and participants involved in most of the Foresight processes analysed here. How does this argument stand in view of the seven dimensions of results we have identified?\n",
"role": "user"
},
{
"content": "Purpose: The purpose of this paper is to shed light on innovation systems dynamics and the positioning of Europe in a longer term perspective, with special attention to the international governance of the major challenges which humanity is facing.\nDesign/methodology/approach: The method used is based on a secondary analysis and interpretation of Foresight studies. The underlying assumption is that Foresight exercises can be considered as the scene where techno-economic systems and trajectories are proposed, discussed and shaped.\nFindings: Foresights can be distinguished by the challenges and issues they focus on: - the innovation race: competitiveness and influence through innovation, - the thematic concerns: the stakes of global public goods, - the normative perspectives: a new style of development in the making. The question to know whether Europe can be a world driver in fostering responsible innovation models and cooperative modes of knowledge circulation and global challenges handling through relevant social and technological innovations.\nOriginality/value: This paper reviews and puts in perspective Foresight exercises in an original way, this on two grounds: first, it includes exercises fostered at national, regional (in particular the EU) and global (UN system) levels - this to account for the emerging multi-level governance and, second, it classifies them according to their major focus, namely the innovation race, the thematic concerns (global public goods) and the normative perspectives (new style of development). It provides to the actors of innovation in both the public and private sector an understanding of the current key-concerns and visions on innovation systems and the position of Europe.\n",
"role": "assistant"
}
] |
Product importance and complexity as determinants of adaptation processes in business relationships
|
[
"Industrial marketing",
"Mining industry",
"Adaptation processes",
"Capital equipment",
"Product complexity",
"Product importance"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\n__NO_TITLE__: An executive summary for managers and executive readers can be found at the end of this article.\n1. Introduction: The essence of business is exchange, early defined by Bagozzi (1975, p. 36) as \"the transfer of a product or service for money but also comprises social, psychological or other intangible entities\". In B2B markets many types of exchanges take place between buyers and sellers. Different types of products are exchanged with various degrees of complexity. These exchanges occur commonly within the framework of buyer-seller relationships. Often, buyers are reluctant to change their sources of supply and a degree of stability and durability in their dealings with many of their suppliers is present (cf., Turnbull et al., 1996, p. 44).\n2. Theoretical framework: Drawing upon the theoretical framework set forth by the interaction and network approach (INA), we address the three research questions of this study building on previous research conducted regarding the adaptation processes and the task factors that affect the buyer-seller relationships.\n3. Methodology: 3.1 Case study design\n4. Case studies: review and discussion: Two organizational case studies are briefly described. Three organizations are involved in the cases, i.e. the business relationships under investigation. These will be referred as B1, B2 and S1; respectively two buyers and one seller of capital equipment for use in mining operations. The case descriptions will follow the theoretical framework previously outlined, i.e.: presentation of the market and situational antecedents, and discussion of the adaptations taking place in the relationships.\n5. Findings and conclusions: In any research project, choices made initially in the theoretical frame of reference, create limitations in interpreting the results. Hence, the scope of the research is delimited by the dimensions and factors we specified and evaluated. Hereafter, first, we present main results regarding: market and situational antecedents, and adaptation processes. Second, we return to the posed research questions and draw relevant considerations with regard to them.\n6. Contribution and study's limitations: Our intention has been to characterize the adaptation processes and understand the role of market and situational antecedents as potential drivers of adaptive behavior. This research has contributed to the field of interaction and network approach (INA) since it maps activities and events entailed in the adaptation processes, particularly, in an industry where no such studies were found. The clear identification of dimensions that drive adaptive behavior is also a theoretical contribution, e.g. the corroboration of product importance, complexity and market concentration and resource dependency as drivers of adaptations and development of long-term relationships. Furthermore, other market and situational antecedents are identified as key drivers to the development of long-term business relationships: the supplier's brand name and the choice of a direct channel strategy. Moreover, two-task related factors were extremely relevant as selection criteria for capital equipment: the functional suitability and the degree of standardization/customization of the equipment.\n7. 
Managerial implications: The challenge for mining companies in a resurgent market is to make the right choices during the financial crisis so they are able to take advantage of the upside potential of the industry. Capital equipment decisions are core to these companies' profitability. Thus, the nature and development of the relationships between buyers and sellers of capital equipment are extremely important.\n",
"role": "user"
},
{
"content": "Purpose: The aim of this paper is to characterize adaptation processes in business relationships. The nature of adaptive behavior is described by outlining activities and events in these relationships. The role of perceived product importance and complexity in the character of the adaptations processes is sought.\nDesign/methodology/approach: A case study approach is adopted and two long-term relationships between buyers and sellers of capital equipment in the mining industry are investigated. Perspectives from both sides of the dyad (buyer and seller) were attained through in-depth interviews.\nFindings: Findings show that supplier-based adaptations occur more frequently than customer-based adaptations. The market antecedents of concentration and resource dependency are identified as drivers of adaptive behavior. Furthermore, product importance and complexity are key drivers to adaptation processes and the development of long-term relationships. Supplier's brand name and the choice of a direct channel strategy are identified as indicators of long-term commitment to the market. Moreover, two-task related factors were extremely relevant as selection criteria for capital equipment: the functional suitability and the degree of standardization/customization of the equipment.\nResearch limitations/implications: The findings are specific to the market environment and recommendations are given for the realm of the mining industry. Multi-case studies in multi-contexts should be conducted to enable generalization and potential theory-building.\nPractical implications: A number of important managerial implications for buyers and sellers of capital equipment in the mining industry are given.\nOriginality/value: This paper contributes to knowledge by providing rich descriptions of adaptation processes. This real life evidence enables the identification of major drivers of adaptive behavior and, consequently, the development of long-term successful relationships.\n",
"role": "assistant"
}
] |
Do hostels play a role in pleasing Millennial travellers? The Portuguese case
|
[
"Portugal",
"Hospitality",
"Consumer trends",
"Millennials",
"Consumer behaviour",
"Hostels"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\n1. Introduction: Millennials, also known as Generation Y, are considered to be the ones born between 1980s and 2000s. They differ from previous generations (X and Baby Boomers) by their lifestyle and behaviour, and are disrupting many social and economic dimensions. Millennials are redefining social, political, demographic, technological, and economic fabric worldwide and increasing presence and clout in the marketplace and workplace (Garikapati et al., 2016). Considering this scenario, it is not surprising that considerable attention is being paid to Millennials' priorities, lifestyle preferences, technology adoption, activity, housing choices and, of course, travel behaviour.\n2. Millennials' lifestyle, consumption and travel behaviour: With much being discussed about delimitations of generational cohort, we opt for Brosdahl and Carpenter's (2011) definition that delineates Millennials as those born between 1981 and 1999. Millennials are different from previous generations (Generation X and Baby Boomers), but why? They are the most educated generation so far - in terms of formal education (McDonald, 2015). They were born and raised in the internet era and, compared to their parents and grandparents, they exhibit a lag in some lifecycle activities.\n3. The hostel \"boom\": Millennials represent 70 per cent of total hostel travellers, where 15 per cent of them used a hostel in the last 12 months. Apart from this, hostel customers consider travelling an important part of their lifestyle and are consequently more likely to go abroad, when compared to other leisure travellers (Hostelworld Group, 2016a). Hostel travellers tend to spend important part of their budget on travel, where in absolute terms, they spend approximately EUR1,100 more than other travellers annually. Millennials that stay at hostels tend to spend an average of EUR1.522 during their latest main trip (WYSE TC Report, 2015).\n4. The Portuguese case: In Portugal, the first hostel was officially inaugurated in 1993, in Lisbon, and it was associated with Youth Hostelling International. It was only in 2005 that the first private hostel (Lisbon Lounge Hostel) appeared. According to Andrade (2014), it was from the increase of those private, independent hostels that the Portuguese hostelling market gained new force.\n5. Methodology: Methodological approach was based on two complementary qualitative methods: ethnographic participant observation and online ethnography (\"netnography\"). Ethnographic approach was chosen for being the method that allows exploring the nature of social or cultural phenomena, rather than aspiring to test hypotheses, which was used in previous studies that explored backpackers' and travellers' behaviour (as in Sorensen, 2003; Galani-Moutafi, 2000). Traditionally, ethnography provides tools to understand offline dynamics. However, with new online social dynamics new approaches are required. Netnography, also called online ethnography, is ethnography adapted to the study of online communities (Kozinets, 2002; Rageh et al., 2013) and fits the present research in the context of internet working as a marketplace for Millennials, also known as the digitally native generation.\n6. Results: 6.1 What do the best hostels do?\n7. Discussion and practical implications: The results show that all the hostels considered \"the best\" in Portugal offer more than just a bed for a low fare. 
Instead, they involve guests from the beginning to the end of their stay. Each hostel analysed has its own characteristics and particularities (Table II), but they all align their services with guests' profiles in order to provide the best experience during their stay. While at a party hostel guests may find a pub route (pub crawl), in a traditional traveller house they can rest from their journey while listening to a jazzy sound and enjoying a wine tasting. It was identified that the hostels in focus provide a superior experience because, once there, the Millennials are offered the standards of a good hotel (in terms of cleanliness, facilities, etc.) but with a communal atmosphere and a set of activities and conveniences that the new travellers require.\n8. Conclusion: In terms of travel, Millennials are not synchronised with previous generations due to different lifestyles, attitudes and preferences. They place travel among their life priorities and are willing to spend a significant part of their budget to explore the world, meet new people and search for local experiences. But even as they have been increasing their importance in the tourism market, for many hospitality businesses they seem to be a target yet to be reached. For hostels, however, which are the youth lodging par excellence, this is an already accomplished goal.\n",
"role": "user"
},
{
"content": "Purpose: The purpose of this paper is to identify the best hostel practices and trends, and to evaluate and discuss how they are keeping up with the present demand created by the Millennials market.\nDesign/methodology/approach: Methodological approach is based on two complementary qualitative methods: ethnographic participant observation and netnography. Portuguese hostels are the target of this study, as they are considered some of the best in the world. First, a fieldwork was conducted in three top-rated hostels in Lisbon, aiming to immerse in hostel's environment. Then, 300 online reviews were analysed to identify what customers value the most in those services. Data interpretation was done through a content analysis crossing supply-demand perspectives.\nFindings: Results indicate that, while staying in a hostel, the Millennials want to: meet people, engage in activities and events, have local experiences, feel comfortable/at home, get good value for money and enjoy the convenience. In return, hostels play an important role in hospitality industry by pleasing the Millennials, by offering: central location, an interactive atmosphere, functional facilities, well-planned design and decoration, a set of activities and events and personal treatment.\nOriginality/value: This paper is one of the few to describe Millennials' travel behaviour towards hostels' services.\n",
"role": "assistant"
}
] |
Thematic mapping of cloud computing based on a systematic review: a tertiary study
|
[
"Governance",
"Cloud computing",
"Systematic reviews",
"Thematic synthesis",
"Tertiary study"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\n1. Introduction: Cloud computing is considered one of the most important advances in the history of computing, resulting in a significant shift in how IT services are developed and deployed (Marston et al., 2011). Cloud computing has attracted the attention of many academic and business communities (Bayramusta and Nasir, 2016) and, as a result, many types of research and reports have been published indicating various directions for the future. One popular kind of study among cloud researchers is the systematic review (SR), used to explore various cloud computing issues. SRs are generally used to understand, assess and synthesize the results of all existing studies in relation to a specific research domain (Keele, 2007). Contrary to the traditional and conventional literature review, SRs are conducted using a well-defined and step-by-step research methodology (Kitchenham et al., 2010).\n2. Cloud computing and tertiary studies: Cloud computing refers to the use of remote servers over the internet to store and process data (Castelino et al., 2013; Alshamaila et al., 2013). Cloud computing is a way to make computing resources more flexible in which increase the computing capacity or to add capability on the fly (Knorr and Gruman, 2008). Typical benefits are related to the outsourcing of the control, maintenance of the cloud, flexible access to capacity, which could result in potential cost-savings and better cost control (Janssen and Joha, 2011). Often contracts are made by cloud computing users and providers. Cloud computing payment can be based on the actual capacity used or subscription based. Apart from contractual governance also relationship governance between the user and provider is a key aspect (Loukis et al., 2019).\n3. Research methodology: The previous section presented an overview of the most important aspects of conducting systematic research in fields related to cloud computing (information systems, software engineering). As a tertiary study, this article adopts a research methodology framework (adapted from (Keele, 2007; Fink, 2013; Bandara et al., 2011; Levy and Ellis, 2006; Cruzes and Dyba, 2011a) which uses a three-layered iterative approach, including:\n4. Findings: This section describes the results of this study based on the questions determined in subsection 3.1. The number of review studies included, published between 2011 and 2016 is shown in Figure 2. Out of a total of 142 identified articles, 94 were initially included (see Appendix 4)[5], of which 76 met the final criteria after evaluating the quality of included review papers based on the AMSTAR assessment checklist. The annual trend in publications shows the increasing interest of researchers in conducting SRs in the field of cloud computing. In the following, the results of quality evaluation of SRs related to the first phase of the study are provided in Section 4.1, while in Section 4.2, the analysis of the thematic synthesis or coding process were presented using the Nvivo software reports.\n5. Thematic maps of cloud computing themes: In this section, we discuss the main results of the review studies based on the three themes of cloud computing identified: technology, business and governance. For each theme, the mostly related concepts and topics, as well as the relationship between them, are described on the basis of the content coded in each review paper. 
These concepts, topics and the relationships between them are displayed in a conceptual model (thematic map) using a UML class diagram for each theme.\n6. Discussion: The volume of research on cloud computing in recent years has raised the need for a thorough review of this research. Accordingly, as a tertiary study, this research attempted to evaluate and analyze the review articles based on their methodological approach and contextual content. In the methodological evaluation, we found that the mean quality score of the SRs was acceptable and improved over time. Most of the studies (85 percent) did not use professional software tools, which we recommend for future SRs. In addition, this study identified guides and other references that will help researchers improve the quality of their review studies.\n7. Conclusion and future work: This tertiary study brings clarity to the literature landscape, as the SR studies performed in the cloud computing field demonstrate miscellaneous outcomes and utilize different approaches. The most important novelties of this study include identifying the research themes, concepts and topics addressed in cloud SRs and presenting their relationships using thematic maps. The three main cloud computing themes are technology, business and governance. By integrating the thematic synthesis and SR methods, we developed an iterative three-layered research framework for conducting such tertiary studies. We also evaluated the methodological quality of SRs in the field of cloud computing and suggested some solutions to improve it.\n",
"role": "user"
},
{
"content": "Purpose: A large number of systematic reviews (SRs) studies have been performed in the cloud computing field, demonstrating miscellaneous outcomes and utilizing different approaches. Accordingly, a meta-review of cloud SRs is needed to appraise the results of such studies and create an integrated understanding. The paper aims to discuss these issues.\nDesign/methodology/approach: A tertiary study was conducted using a systematic method to analyze SRs including two stages: searching and screening the SRs and thematic synthesis of results. As a qualitative data management tool, Nvivo software was used to support the research process, for data coding and synthesis.\nFindings: First, by searching electronic sources between the year of 2011-2016, out of a total of 142 identified articles, 94 articles were included according to pre-determined criteria, of which 76 articles were approved after qualitative evaluation. In the second stage, identifying the research themes, a map of the concepts and issues related to each theme was drawn up. The analysis shows that the quality of articles has improved but can be further enhanced using methodological guidelines as well as supporting tools. The research has focused more on the technical aspect, although there is an equal demand for synthesizing of cloud governance concepts.\nOriginality/value: This is the first tertiary study which presents the main research themes and concepts of cloud SRs in form of thematic maps by using the thematic synthesis and SR methods. This paper also provides some recommendations to improve reviews after evaluating the quality of papers. This study can support reviewers for future SRs in the field and also helps practitioners and managers to have a better understanding of different aspects of cloud computing.\n",
"role": "assistant"
}
] |
Learning climate and customer-oriented behaviors: the mediation of customer knowledge
|
[
"Customer knowledge",
"Knowledge management",
"Organizational learning",
"Customer-oriented behaviours",
"Learning climate"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\nIntroduction: Knowledge is one of the key strategic resources that can produce a sustained competitive advantage (Grant, 1996). In service organizations characterized by frequent contact between a salesperson and the customer and a high intensity of customer knowledge mobilized in the underlying operations, salespeople need to have rich customer knowledge about customer characteristics and various strategies for meeting the diverse needs of different customers (Bettencourt et al., 2001; Mention and Bontis, 2013). Such salespeople's customer knowledge is often tacit and tends to be highly local or organizationally specific (Choi and Lee, 2002). As tacit knowledge is obtained through imitation and practice, the essential dilemma for service organizations is how to manage tacit knowledge in such a way that it contributes to organizational performance (Pham and Swierczek, 2006).\nLiterature review and hypothesis development: Learning climate\nMaterial and methods: Sample and procedure\nResults: Descriptive statistics and correlation analysis\nDiscussion and implications: Discussion\nLimitations and directions for further research: Despite the contributions that this study has made to the field, it also faced several limitations. First, the study took only a snapshot of salespeople and customers and was unable to follow these individuals over a longer period of time. Considering that employees' perceptions of learning climate are not static and customers often change their attitudes and behaviors based on their usage experience or product knowledge, a cross-sectional research design does not offer nearly the same insights into the dynamics of interactions between salespeople and customers as a longitudinal design would. As such, a longitudinal design would offer greater insights into this issue in the future.\nConclusion: By integrating the perspective of KM with organizational learning, this research empirically tested the effect of learning climate on salespeople's customer knowledge and customer-oriented behaviors. Using salespeople-customer nested survey data on a broad cross-section of retail banks in Taiwan, we provided empirical support for the conjectures that learning climate helps banks improve salespeople's customer knowledge and that customer knowledge positively increase salespeople's customer-oriented behaviors.\n",
"role": "user"
},
{
"content": "Purpose: The purpose of this paper is to evaluate the effect of learning climate on customer-oriented behaviors by incorporating salespeople's customer knowledge in the banks. It also explores the mediating role of customer knowledge between learning climate and customer-oriented behaviors.\nDesign/methodology/approach: A conceptual two-level model that links learning climate to customer-oriented behaviors was developed and tested using data collected from salespeople and customers in banks in Taiwan. Data from 444 customers involving 92 salespeople was collected via a questionnaire and analyzed using hierarchical linear modeling.\nFindings: The results show that learning climate encourages salespeople to develop customer knowledge and customer-oriented behaviors, and that salespeople's customer knowledge positively influences customer-oriented behaviors. This study also provides empirical support for the hypotheses that learning climate helps salespeople increase customer-oriented behaviors through improving salespeople's customer knowledge.\nResearch limitations/implications: The findings highlight the importance of enhancing learning climate and customer knowledge to enable banks to improve salespeople's customer-oriented behaviors. This research also points to customer knowledge as mediating mechanisms that can explain the association between learning climate and customer-oriented behaviors in the sales context.\nOriginality/value: By integrating knowledge management with organizational learning research, this study evaluates the effect of learning climate on salespeople's customer-oriented behaviors by incorporating their customer knowledge. In addition, the present study points to salespeople's customer knowledge as one of several mediating mechanisms that explains the association between learning climate and customer-oriented behaviors.\n",
"role": "assistant"
}
] |
Quebec confronting its future: some tools for public decision making
|
[
"Forecasting",
"Decision making",
"Public sector organizations",
"Strategic planning",
"Canada"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\n__NO_TITLE__: Since its foundation, Quebec had to concern itself with its own future. New France, colonised by S. de Champlain in 1608, but practically left to its own devices by France, was handed over to Great Britain in 1763. The following century was a struggle to ensure the economic viability and demographic survival of the French-speaking community. A century after Canada was given dominion status, in 1867, Quebec still remains divided as to the question of its constitutional future, and in 1995, a referendum on independence saw the \"nos\" win the day with only 50.5 per cent of votes.\n1. Strategic planning in widespread use: To improve governance and the effectiveness of public policies, both in settling current problems and in taking account of those that will arise in the longer term, the Quebec authorities have tried to instil more strategic reasoning into the decision-making process.A planning process was introduced in 1996 for the whole of State administration. It is run by the Secretariat aux priorites et aux projets strategiques (Priorities and Strategic Projects Secretariat) in the Ministere du Conseil executif (the Premier's department - www.mce.gouv.qc.ca/ministere/ministere.htm), set up to enhance the Cabinet's efforts regarding planning and co-ordination. This strategic planning includes three stages:1. To start with, the government explains its vision and its priorities for action for the coming years.2. The second stage takes place under the ministries' responsibility. Each one has to draft its strategic plan. These plans are generally worked out for three to four years. Most of the ministries have an \"assistant deputy minister\" (equivalent of a Central Offices director) in charge of planning. In accordance with the public administration act passed in 2000, each ministry presents its strategic plan, which includes its mission, the context in which it evolves, strategic directions, objectives, main lines of intervention and performance indicators. Besides the ministries, most of the public agencies have strategic planning. For example, the \"Surete\" du Quebec (police force) is currently applying its strategic plan for 2004-2007.3. Finally, all the ministries' strategic plans are presen-ted to the Secretariat aux priorites et aux projets strategiques, which checks their coherence, and to the Secretariat du Conseil du Tresor (Treasury Board Secretariat), which assesses their pertinence with regard to means required. The minister in charge then submits these plans to the National Assembly. Implementation of the plans is indirectly monitored by three ministerial coordination committees. These committees have sector-based skills: Social development; Economic prosperity and Sustainable development; Citizenship and Culture.This strategic planning is closely related to budget planning. Here the Conseil du Tresor (Treasury Council) steps in, also alongside the Cabinet. The Conseil du Tresor prepares the spending budget for ministries and agencies which is put to the National Assembly. This budget is supplemented by the annual expenditure control plans that ministries and agencies have a legal obligation to prepare. In addition to this annual plan there is a yearly action plan, which is not required by law. This action plan is to materialise the vision conveyed by the strategic plan. 
Above all, strategic planning is based on management by results: each ministry and agency has to prepare an annual management report that notably includes the results obtained in comparison with the goals set by both the strategic and the annual expenditure control plans. The Cabinet and the Conseil du Tresor are the mainstays of this strategic and budget planning system and each of them is supported by a secretariat whose analyses and proposals are to help ministries and agencies throughout their process with the aim of complying with the requirements of the law. For example, the Secretariat aux priorites et aux projets strategiques helps them identify measurable objectives and results indicators.\n2. Strategic intelligence is gaining ground: In 1999, the government of Quebec (www.gouv.qc.ca) set up the Reseau de veille integree sur les politiques publiques, an integrated intelligence network for public policies coordinated by the Secretariat aux priorites et aux projets strategiques. As its name says, it is first and foremost a strategic intelligence system, i.e. a compendium of up-to-date information. But it is also concerned with identifying in time the changes that are emerging and the trends that may shape the future of public policies, comparing the situation in Quebec with what is being done elsewhere in public policies and visions of society, proposing a shared view of future-bearing events and collapses, keeping decision-makers informed of the challenges Quebec society has to face up to and introducing better dynamics of exchange and consultation between the concerned parties. This network brings together some 30 ministries and agencies, as well as five of the largest towns in Quebec. A corresponding number of \"coordinating watchers\", spread over each ministry and agency, run a network of nearly 500 \"watchmen and women\" who, in their own institution and in addition to their everyday task, help with spotting the relevant information for the themes dealt with by the Network. Network members are linked to one another via intranet, which is deemed to be user-friendly and efficient. The Network works on \"sectorial targets\" defined by the ministries, and also on six \"Network targets\": 1. The effects of globalisation on the State's role and functions (sovereignty, interdependence and public policy management). 2. Social cohesion (see Box 1). 3. Continental dynamics (interaction within the American continent and its growing integration, in particular through NAFTA). 4. Territorial dynamics (area planning, urban centres and local governance). 5. Risk prevention and management (health, technological or natural risks). 6. What is called la gouverne (governance, i.e. administrative management of the information revolution but also of parity, addressed from the point of view of public services). Given the recent publication of the document Shine among the Best (Gouvernement du Quebec, 2004), presenting the government's vision and priorities for action, these \"Network targets\" are being reviewed. To disseminate its work on its web site, the Network uses \"clips\", a kind of short alert report about topics, and more meaty summary sheets, comprising a short analysis of the situation. It also produces \"Info-Reseau\", which provides decision-makers with a selection of clips and sheets likely to be of most interest to them. Lastly, the Network periodically publishes the bulletin Prospective. 
Whereas every member of the Network uses the site, \"Info-Reseau\" and Prospective are more especially aimed at decision makers. It is not easy to assess the impact of this intelligence activity. A positive point is that the Network is fostering the circulation of pertinent, structured information within the administrative system. The number and positioning of \"watchers\" allows the Network to be in touch with all the decision-makers. In addition, the Cabinet is in contact, via the Secretariat aux priorites et aux projets strategiques, with the Integrated Intelligence Network and this makes it possible to link up intelligence and planning. Elsewhere in the administration, other agencies are developing a strategic intelligence function. This is so for the Public Service Commission, an independent monitoring and control organisation that makes sure the general principles of the law relating to the civil service are observed, that can work by forming a tribunal administratif (a one-stop jurisdiction allowing citizens to solve their disputes with the government) and can be consulted by the Conseil du Tresor. The Commission developed its own system of intelligence and futures studies in 2003-2004. Another example is that of the Observatoire de l'administration publique, which runs an arena for thought and action about governance, spotting current trends in public administration. This Observatory is dependent on Quebec's \"Ecole nationale d'administration publique\" public service school and publishes a bimonthly newsletter, Vigie (www.enap.uquebec.ca/enap-fra/observatoire/obs-vigie.html), which is a detection and intelligence tool. Part of the Ministry of Economic and Regional Development and Research, the Department of strategic information and futures produces a summary every fortnight using an information watch of 350 internet sites around the world: Reperes RSTI (www.reperes.mrst.gouv.qc.ca/fr/index.php) is a review of the press specialised in research, science, technology and innovation (RSTI).\n3. Futures studies, in the strict sense, are still relatively rare: Futures studies are not greatly developed in Quebec when taken to mean the organisation of knowledge concerning the future of a society in more than five years' time (that is to say a more distant perspective than the one used for planning) and possibly integrating the construction of scenarios. Futures studies are perhaps paying the price of strategic intelligence and planning being so well developed, to an extent where political decision makers, reckoning they are sufficiently enlightened in the short and medium term, do not feel quite so much the need for a longer-term outlook. Another explanation undoubtedly lies in the preference for the short term that dominates Anglo-Saxon culture and in which, despite its singular nature, Quebec evolves. Experience in futures studies in Quebec is therefore restricted to a few exercises, mainly conducted by groups of researchers connected to the universities. 
A few trail-blazing projects can be mentioned here. At the beginning of the 1970s, at the invitation of the chancellor of the University of Quebec, Alphonse Riverin, a workgroup on futures studies was set up by Pierre-Andre Julien with two other researchers, Pierre Lamonde and Daniel Latouche, at the time lecturer in political science at McGill University in Montreal. In 1975, they published Methode des scenarios: reflexion sur la demarche et la theorie de la prospective (Scenario Building: Reflection on the Approach and Theory of Futures Studies) (Lamonde and Latouche, 1975). This work helped establish a link between the Anglo-Saxon approach, at the time dominated by the Rand Corporation's methods, and the European approach, more centred on social sciences. In 1976, the same team published a report called Quebec 2001 - Une societe refroidie (Quebec 2001 - A Dampened Society) (Julien et al., 1976), a rather pessimistic futures study. A total of 6,000 copies of this scenario, presented as being the scenario of the unacceptable, were very quickly snatched up. Today, Daniel Latouche is a professor at the National Institute for Scientific Research where he is in charge of the Research and Future Studies Workgroup, mainly devoted to new urban territories. In 1993, he jointly published Projet Quebec 2000+, une prospective du Quebec a l'aube d'un autre siecle (The Quebec 2000+ Project: a Futures Studies Analysis on Quebec at the Dawn of a New Century) (Latouche, 1993). Institut GamMA has carried out several futures-oriented exercises under the supervision of Kimon Valaskakis, professor of economics at the University of Quebec in Montreal and who was at one time Canadian representative to the OECD. Among the projects run by K. Valaskakis is a report commissioned by the Conseil superieur de la langue francaise (French language council - government of Quebec): Prospective de la langue francaise au Quebec - Un cadre de reflexion strategique (The Future of French Language in Quebec - A Strategic Framework for Reflection) (Valaskakis, 1985). In 1978, Valaskakis and Dansereau had published a Prospective socio-economique du Quebec (Economic and Social Futures Studies of Quebec) for the Office of planning and development in Quebec (Valaskakis and Dansereau, 1978). Four years later, a Quebec futures workgroup, convened by Roland Jouandet-Bernadat, published Le futur du Quebec au conditionne (The Conditional Future of Quebec) (Jouandet-Bernadat, 1982). Territorial futures studies are also present, as shown by the work carried out by the Interdisciplinary research workgroup for the East of Quebec about the Gaspesie region.\n4. Promising foresight: the \"Perspectives STS\" project: \"Perspectives STS - Science et Technologie au service de la Societe\" (Science and Technology in the Service of Society) is a project sustained by the Conseil des sciences et de la technologie (CST (www.cst.gouv.qc.ca)), presided over by Helene P. Tremblay. A public organisation in which key figures from every background sit, the CST's mission is to advise the Minister of Economic and Regional Development and Research in order to promote scientific and technological development in Quebec. The CST is particularly trying to ensure that science and technology are more in tune with the concerns of society, notably with regard to their impact and the choices for research priorities. The Perspectives STS project is an aspect of this intent. 
It proposes interviewing the population in order to identify the major socio-economic challenges of the coming 15 to 20 years in Quebec society and then, with the participation of scientists, to direct research and innovation accordingly. So, unlike other foresights, Perspectives STS takes the analysis of society's demands as a starting point rather than the potential supply proposed by science and technology in the medium or long term. The operation should last three years. It includes two major phases: 1. A phase currently in progress to identify the major socio-economic challenges for Quebec society over a 15- to 20-year perspective; these are challenges to which science and technology can make a significant contribution (see Box 2). 2. As from summer 2005, a phase of analysis and strategic forecasting will take place in which researchers and \"users\" of technologies will work out joint objectives and strategies to meet the challenges identified in the first phase. The steering committee for Phase 1 not only includes CST representatives, but also senior civil servants and academics. The Conseil does not interfere with the content, but leaves it up to the various players involved to determine and negotiate their choices, so the public consultation dimension is present. The project has received the backing of associations representing Quebec's scientific community, heads of research in industry and several ministries, including the Education and Health Ministries. In Quebec, the challenge today probably consists in building a public futures approach that goes beyond the technological sphere alone, and to do so using the strategic planning base and by drawing on information gathered by the Integrated Intelligence Network. The Network appears to be the best place to start up futures exercises integrating the construction of scenarios. But for the time being it does not have the official mandate for this.\n",
"role": "user"
},
{
"content": "Purpose: - This paper seeks to analyse the tools and initiatives developed in the public sector in Quebec to anticipate futures challenges.\nDesign/methodology/approach: - The paper surveys and analyses forward-looking activities within governments, the public sector and institutes in Quebec.\nFindings: - The paper finds that Quebec is taking care of its future, especially through sustainable development policies. Strategic planning is in widespread use, boosted by administration reforms in the 1990s; strategic intelligence is gaining ground through a network gathering the main public institutions; and a promising foresight is going on.\nOriginality/value: - The study provides an insight into the state of futures research in Quebec and its characteristics.\n",
"role": "assistant"
}
] |
Leader humility and constructive voice behavior in China: a dual process model
|
[
"Organizational identification",
"Leader humility",
"Constructive voice",
"Relational identification with the supervisor",
"Speaking out",
"Speaking up"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\nIntroduction: In today's constantly changing economy, the commercial context becomes more complicated and dynamic. It is more difficult for a single person to figure it all out independently. Thus, making timely and correct decision and reflection relies more on others' suggestions and feedbacks. Constructive voice refers to \"the voluntary expression of ideas, information, or opinions focused on effecting organizationally functional change to the work context\" (Maynes and Podsakoff, 2014). Consistent with Van Dyne et al.'s (1998, 2003) prosocial voice behavior, constructive voice aims to improve organization's work procedures, methods and collective performance. However, the constructive voice may also incur risks since the voicer may challenge the target's status, and thus cause undesirable interpersonal and career outcomes (Morrison, 2014). Thus, although the front-line employees are enabled to identify potential problems and emerging opportunities in practice, they are reluctant to speak out their critical ideas and suggestions (Morrison, 2011). Specifically, employees in China are more inclined to silence, mainly because the hierarchical system forces employees to be deferential to their supervisors; the Guanxi culture encourages harmonious interpersonal relationship and excludes disagreements (Chen et al., 1998). Hence, how to effectively stimulate Chinese employees' constructive voice behavior is a critical issue in both academia and practice.\nTheory background and hypotheses: Leader humility\nMethods: Sample and procedure\nResults: Preliminary analyses\nDiscussion: Results and analyses\nConclusion: This study sheds light on the underlying process through which leader humility influences followers' speaking-up and speaking-out behaviors. Results indicate that leader humility could encourage employees speaking up by enhancing their relational and organizational identifications, and encourage employees speaking out by enhancing their organizational identifications.\n",
"role": "user"
},
{
"content": "Purpose: The purpose of this paper is to find how leader humility affects employees' constructive voice behavior toward supervisor (speaking up) and coworkers (speaking out) from an identification-based perspective, and seeks to verify the effectiveness of leader humility in the Chinese context.\nDesign/methodology/approach: Data were collected from 325 employees in four Chinese companies with two phases. In the first phase, the participants were asked to report the leader humility, their identification of their relations with the supervisor, and their identification with their organization. In the second phase, they were asked to report their voice behaviors toward their supervisors and coworkers.\nFindings: The results indicate that leader humility strongly predicts both employees' voice behaviors of speaking up and speaking out. Results further suggest that relational identification with the supervisor explains why leader humility promotes employees speaking up, while organizational identification explains why leader humility promotes employees speaking up and speaking out.\nPractical implications: Managers with humility can successfully shape employees' relational and organizational identifications, which in turn encourage their voice behaviors toward supervisors and coworkers. Hence, behaving humbly in working places could be an effective way for managers to promote organizational cohesion and creativity.\nOriginality/value: Although leader humility attracts much attention in both academia and practice, researchers have been primarily focusing on conceptual development and measurement issues, and empirical studies are rare. This is the first research connecting leader humility and employee proactive behaviors. Moreover, it takes an in-depth analysis of the constructive voice behaviors by differentiating them based on their targets.\n",
"role": "assistant"
}
] |
Triple bottom line model and food safety in organic food and conventional food in affecting perceived value and purchase intentions
|
[
"Corporate social responsibility",
"Perceived value",
"Environmental concern",
"Sustainable management",
"Food safety concern"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\nIntroduction: With the development of industry, animal husbandry has increased carbon emission, causing increased environmental pollution and climatic anomalies, which pose a threat to the future of humanity. Therefore, the 42th Convention of the United Nations in 1987 has brought up the idea of \"sustainable development\" (United Nations, 1987). Sustainable development includes three different aspects: economy, environment and society. Sufficient economic benefit is the power of corporate sustainable development. However, exceedingly so, economic development has caused harm to the environment and given rise to social problems.\nLiterature review and hypotheses development: Environmental concerns and CSR\nMethodology: Questionnaire design\nEmpirical results: Structure equation modeling, SEM\nDiscussion: Food safety, environmental protection and CSR have become global issues in recent year. Based on the result of the empirical research, this study theoretical and management proposes the following implication.\n",
"role": "user"
},
{
"content": "Purpose: The purpose of this paper is to investigate whether there is a significant difference in consumer's attitude and purchase intentions toward organic food and conventional food (non-organic food and non-genetically modified) under the influence of corporate social responsibility (CSR) for environmental protection in the context of global warming and frequent food safety issues.\nDesign/methodology/approach: To understand the triple bottom line(TBL) affect the consumers' attitude and purchase intentions of organic food and conventional food from the consumer's point of view, primary data collected via 363 valid questionnaires tested the conceptual model by structural equation modeling (SEM).\nFindings: The results show that CSR has a significant influence on consumers' perceived value. Moreover, perceived value is an important factor in affecting consumers' attitude and purchase intentions. And consumers' attitude toward environmental concern, food safety concern and CSR also have an indirect effect on purchase intentions.\nOriginality/value: The findings of the study would help corporations not only pursue economic profit but also keep a balance within the environment and the ecosystem, and serve as a reference to corporations to fulfill CSR for the goal of sustainable management.\n",
"role": "assistant"
}
] |
US REITs capital structure determinants and financial economic crisis effects
|
[
"Capital structure",
"REITs",
"Leverage",
"Pecking Order Theory",
"Financial economic crisis",
"Trade-off Theory"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\n1. Introduction: The Trade-off Theory (TOT) and the Pecking Order Theory (POT) are the most relevant theories when explaining the capital structure drivers of firms. Property investment is a capital intensive business where the capital structure has an important role. Several studies have investigated the determinants of capital structure of real estate vehicles based on these traditional theories, but the financial economic crisis (FEC) strongly changed their financing strategies.\n2. Literature review: Previous studies on the capital structure mainly refer to the TOT and POT; each theory is introduced together with its relative applicability to the REITs industry.\n3. Data and methodology: The main purpose of this paper is to discover what are the main capital structure determinants of US REITs. An additional analysis is performed to understand the effects on the capital structure as a consequence of the FEC. Finally, it is empirically identified which, among the TOT and POT, better reflects and explains the capital structure decisions.\n4. Results and findings: Throughout this section, main findings are introduced and analysed. First, in Section 4.1 capital structure determinants influencing US REITs' level of financial leverage are analysed over the ten-year time period ranging from 2005 to 2014. During the first analysis, the effect of most relevant drivers of financing decisions on US REITs level of financial leverage is investigated as well as the direct effect of the FEC on US REITs' capital structure. Second, findings from the first model are compared against expectations from the TOT and the POT in Section 4.2. Finally, in order to deepen the understanding of the effect of the FEC on US REITs' financing decisions, an additional analysis is performed in Section 4.2. Here, the indirect effect of the FEC on US REITs' level of financial leverage is tested. The goal of the analysis is to gain an understanding of whether, and to what extent, the FEC changed relationship incurring among variables, thus indirectly impacting US REITs' financing decisions.\n5. Conclusion: This paper analysed, from different perspectives, the most relevant drivers of US REITs' capital structure and the FEC impact. During the period 2005-2014, the most relevant variables have been OPERATING RISK, TANGIBILITY OF ASSETS, GROWTH OPPORTUNITIES and CRISIS, in order of relevance. While TANGIBILITY OF ASSETS and CRISIS demonstrated a positive relationship with US REITs' level of financial leverage, for OPERATING RISK and GROWTH OPPORTUNITIES variables the sign was negative. The other selected variables, namely SIZE and PROFITABILITY, did not appear to have a strong influence.\n",
"role": "user"
},
{
"content": "Purpose: The purpose of this paper is to identify the capital structure determinants through an analysis of 74 All-Equity REITs listed in the US market from 2005 to 2014. Furthermore, the paper aims at understanding the impact of the financial economic crisis (FEC) among the identified explanatory variables.\nDesign/methodology/approach: A fixed effect panel regression model is performed based on Trade-off Theory (TOT) and Pecking Order Theory as a starting point to provide expectations on the relationships incurring among the identified variables.\nFindings: First, while tangibility of assets and crisis evidenced a positive relationship with REITs' financial leverage, operating risk and growth opportunities variables displayed a negative relationship. Meanwhile, size and profitability did not appear to influence the capital structure. Second, it appears that the positive effects of tangibility of assets and profitability variables on US REITs' capital structure increased as a consequence of the FEC. Operating risk and growth opportunities variables slightly increased their negative relationship with US REITs' capital structure after the FEC. The TOT prevails when explaining the economic reality underlying US REITs.\nPractical implications: The paper contributes to the understanding of US REITs' financing decisions within the US market. The FEC also had a substantial indirect impact on the financial leverage determinants of US REITs, the latter being nowadays more oriented to maintaining a flexible capital structure.\nOriginality/value: The paper provides a comprehensive view of the medium-term effect of the FEC on US REITs' capital structure.\n",
"role": "assistant"
}
] |
Professional identity in service work: why front-line employees do what they do
|
[
"Service logic",
"Service encounter",
"Service work",
"Service dominant logic",
"Professional identity",
"Service co-creation"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\nIntroduction: Research has shown that the way in which employees understand themselves as professionals (Ibarra, 1999) is reflected in their concrete service work, ultimately being of critical importance with regard to how customers experience value. Hence, value creation does not solely reside in customers' own undertakings, or in co-creation activities. This understanding of professional identity is increasingly being addressed as a key factor in explaining why service employees do as they do when serving their customers. This contrasts with the traditional view, in service research, that the managerial discourse and organisational culture are of key importance when it comes to influencing front-line employees' conduct of their value creating service work jointly with their customers. This somewhat naive notion of managerial power in service organisations is still alive and being nurtured, both in research and in practice. We may all have experience of the power that is inherent in strong professional groups. You do not mess with a doctor, a soldier, or a police officer.\nLiterature review: Professional identity in the context of value co-creation\nMethodology: Given that professional identity is an empirically under-explored field of research, we adopt an exploratory single-case study design (Eisenhardt, 1989; Yin, 2003; Miles and Huberman, 1994). Previous research into professional identity has often been in the form of qualitative case studies, albeit purely relying on interviews or focus groups of individual employees. Our empirical material was taken from a public transport customer centre and consists of a combined set of methods, rather than being purely reliant on interviews. Our multi-perspective approach, consisting of mystery shopper data, a customer survey, observations, personal interviews, and a focus group, enables us to include the customer's view and not just the employee's own view of his/her professional identity. Since the professional identity concept also takes into account the expectations of other people in society (Beijaard et al., 2004), e.g. customers, we wanted to add this perspective in order to gain a more holistic view of the concept. The customer's view of how service workers are supposed to act not only influences the employees' understanding of professional identity but also directly provides salient elements of the inter-subjective phenomenon of professional identity. The customer survey and the mystery shopper data enriched our understanding with this perspective.\nFindings: This section presents six key aspects of professional identity, as seen from both an employee and a customer perspective. Each is described in terms of the individual perceptions and co-creation practices that the informants refer to, illustrated by quotes and behavioural cues. First, we describe the core of professional identity - consisting of the themes, individual resources, cognitive understanding, and interaction know-how - that are crucial to value creation and mirrored in the co-creation of service. Second, we describe the conditions of professional identity - i.e. the service prerequisites partly constituting identity. Third, we present the contour of professional identity, which addresses how identity is partly constituted by service demeanour and functions. 
In general, all these aspects have been shown to be produced and reproduced by co-creation, indicating that professional identity is a wider concept than we normally consider it to be. In relation to the six main aspects, specific elements are also identified in what follows.\nDiscussion and contribution: The limited amount of research conducted into professional identity to be found in the literature is mainly oriented towards how it is shaped and developed. Most of the research is found to be on teaching. Less deals with how professional identity shapes professional action. If we then turn to the service research literature, a few attempts may also be found. Although these relate it to service and service outcomes, this research is rather shallow (e.g. Akesson and Skalen, 2011). This study extends the view of professional identity in service research by studying how it is constituted in relation to service interactions, something that is of key importance when undertaking service work. This entails going beyond managerial and organisational perspectives and how individual service workers see themselves (cognitively) in terms of being service professionals (Beijaard et al., 2004; Weiner and Torres, 2016).\nConclusions: Understanding customer value is an ongoing issue that has not only been in the spotlight for decades but it is also an issue which has been fuelled, in the service marketing research over the past decade, by the debate concerning the S-D logic paradigm (Vargo and Lusch, 2008; Gronroos, 2011). Few empirical studies have noted, however, the link between the co-creation of service and the service employee's professional identity. In order to address this link, this study has used empirical material dealing with the service workers at a customer centre. Our findings revealed that the core of professional identity is much more than just a cognitive understanding that resides within the individual service worker, or within the managerial discourse. It is not limited to the individual service worker's competencies; rather, it is constituted in the continuous interplay between the cognitive understandings that the employee utilises, in relation to colleagues, service encounters, and individual resources. Furthermore, it also reflects the prerequisites of the service organisation, and the outcome of the service work (i.e. short-term intangible demeanours and long-term intangible functions). As such, we argue that professional identity is a key factor in explaining how value is realized in the service encounter setting.\nPractical implications and suggestions for future research: The understanding and definition of professional identity suggested here adds more nuances to theoretically understanding it. As such, it provides a more in-depth understanding and thus a better foundation for theorizing value co-creation. Value, we argue, is also dependent on the professional identity of the service provider, beyond the customer's own value co-creation and activities at the provider-customer interface. Professional identity adds to the value experience and partly explains why value is realized.\n",
"role": "user"
},
{
"content": "Purpose: The purpose of this paper is to identify the key elements of professional identity in service work in order to provide more in-depth theoretical explanations as to why service workers do as they do while co-creating service.\nDesign/methodology/approach: This study takes a multi-perspective on professional identity, i.e. using both an employee and a customer perspective, arguing that the phenomenon mainly consists of what these interactants jointly do during the service interaction and of the meanings that are attributed to it. The authors draw on a detailed empirical study of professionals working at a customer centre. Methodologically, the study is based on practice theory, which helps us to illuminate and analyse both the micro practices and the meaning attributed to the professional identity of service workers.\nFindings: The key elements of professional identity in service work are outlined within a framework that describes and explains three different facets of the service workers' professional identity, i.e. as a core (i.e. individual resources, cognitive understanding, interaction), as conditions (i.e. service prerequisites), and as contour (i.e. demeanour and functions).\nResearch limitations/implications: The findings are based on an empirical data set from a public transport customer centre. As the results are limited to one context, they do not provide statistical generalizability. Although limited to one service industry, the findings may still be of high relevance to a wide range of service organisations.\nPractical implications: The study shows the significance of managers not just talking about the importance of being service-minded; more exactly, a wide range of service prerequisites, beyond cognitive understanding, needs to be in place. It is crucial that service workers are given the time to develop their contextual knowledge of their customers, and of other parts of the service organisation.\nOriginality/value: This study offers original empirical contributions concerning the key elements of professional identity. An alternative conceptualization of professional identity is provided, through which the paper adds to service research, explaining more specifically what kinds of knowledge and skills are in use during the co-creation of services.\n",
"role": "assistant"
}
] |
Management's disclosure of hedging activity: An empirical investigation of analysts' and investors' reactions
|
[
"Accounting",
"Derivative markets",
"Hedging",
"Experts",
"Disclosure"
] |
[
{
"content": "Summarize the following paper into structured abstract.\n\n1. Introduction: Managers can \"use financing policies to communicate with external investors\" (Palepu et al., 2004). For example, a manager can announce that the company has begun hedging activities to signal that the firm is actively seeking to reduce investors' exposure to transitory shocks that are beyond the manager's control such as interest and foreign currency rate changes. When such exposures are hedged, earnings become a more transparent measure of the results of operations unaffected by transitory shocks.The development of the derivative securities market in the 1980s and 1990s provided managers with several new hedging tools. Financial statement disclosure of derivative activities, which began in the late 1980s, provided an explicit communication of management's hedging activities. In this study, I examine how market participants changed the way they process earnings information after learning of the implementation of hedging activities. I present evidence that the magnitude of analysts' forecast errors decreased, and that unexpected earnings were incorporated into subsequent earnings forecasts to a greater extent. Further, I find an increase in the magnitude of the earnings-return relation. This evidence empirically supports the claim that when a company communicates that hedging activities have been started, market participants are better able to forecast earnings and view subsequent earnings announcements as providing greater information about future earnings.Prior empirical studies measure various attributes of firms that use derivatives relative to firms that do not (e.g. Allayannis and Weston, 2001; Nguyen and Faff, 2002; Guay, 1999; Visvanathan, 1998; Geczy et al., 1997; Mian, 1996; Nance et al., 1993). Guay (1999) provided evidence of firms' motivations to implement derivative strategies by showing that firm risk declined following implementation of a derivative strategy. Allayannis and Weston (2001) find a positive relationship between foreign currency derivative use and firm value. Similarly, Nguyen and Faff (2002) find that managers of Australian companies appear to use derivative strategies designed to increase firm value rather than self-serving strategies. Graham and Rogers (2000) also find evidence suggesting that derivative strategies are undertaken to optimize risk management. Collectively, these findings suggest that firms use derivatives to hedge rather than to increase firm risk via speculation. It follows that, ceteris paribus, most firms announcing hedging activities involving derivative securities are, in fact, hedging.From a theoretical standpoint, DeMarzo and Duffie (1995) show that managers' disclosure of hedging behavior is valuable to the firm because it removes noise in the information set of the investors and makes core value-creating activities more transparent. Palepu et al. (2004) suggest that investors can more easily discern management's performance, as reflected by the core value-creating activities, when transitory shocks are hedged. Results in this study support these theories.In this study, I investigate whether the potential benefits of hedging are reflected in more accurate earnings forecasts and more transparent earnings measures. 
Given the theoretical underpinnings described above, ceteris paribus, it would be expected that forecast errors would decrease and that observed unexpected earnings more likely reflect the results of firm operations rather than transitory shocks. I compare earnings predictability, forecast revision behavior, and the earnings response coefficients before and after the disclosure of sustained hedging activity. I compare a sample of non-financial firms that first disclosed the use of currency derivatives or interest rate derivatives in the late 1980s to a sample of non-financial firms that did not disclose any derivative activities over a ten-year period. I find a decrease in the size of forecast errors, an increase in the extent to which forecasts are revised in response to earnings surprises, and an increase in the earnings response coefficients for the user firms. This evidence suggests that market participants recognize that earnings are a more transparent measure of the results of the firm's core operations after they learn of the start of hedging activities. The time period (1985-1995) is advantageous for the research question addressed in this study because market-wide, the first disclosures of interest rate hedging activities were made in the late 1980s, making the pre-disclosure period clearly distinct from the post-disclosure period. This ten-year period precedes the current US GAAP financial reporting guidance of SFAS 133, and accordingly, the financial reporting guidance in place over the sample period (SFAS 105, 107) required only minimal disclosure of derivative activities, most notably the type of instrument and the gross notional amount of derivative instrument contracts in place at year end. These initial, albeit minimal, disclosures of hedging activity are used in this study to identify user firms and to divide the time period between pre- and post-disclosure. The tests examine changes in analysts' reactions to earnings announcements after the firm has disclosed hedging activities and the results indicate that analysts' reactions are changed in ways consistent with predictions. However, the level of detailed information provided in the disclosures is not measured or used in the tests, so there is no loss of relevance of the results due to the minimal level of disclosures. The expectation is that similar tests using a post-SFAS 133 time period would result in consistent, but stronger results than those found in this study because of the increased information provided to analysts. The paper proceeds as follows. Section 2 presents a brief review of the theoretical motivations for managers' hedging activities and develops empirical predictions regarding the impact of hedging activity on earnings predictability, forecast revision behavior, and the earnings response coefficient. In Section 3, the sample selection process is explained. The empirical tests and results are found in Section 4 and Section 5 concludes with directions for future research.\n2. Empirical predictions: Consistent with the Palepu et al. (2004) idea that transitory shocks mask a firm's performance, theoretical explanations for firm-level hedging include information asymmetry theories (DeMarzo and Duffie, 1995). DeMarzo and Duffie (1995) use an information asymmetry model where managers have private information about the source and magnitude of the risks faced by the firm and may have private information about the hedging positions. 
The model develops an equilibrium where investors support managers' hedging because the decrease in the firms' expected profits is offset by the benefits of reduced noise in the investors' information sets regarding the variance of firm payoffs. When hedging activities are disclosed, shareholders are better able to associate manager ability with firm performance. In the DeMarzo and Duffie (1995) model, managers will in some cases actually choose not to completely hedge transitory shocks if full disclosure of hedging activities is required since shareholders can then completely observe manager performance. The model, however, predicts maximal hedging as an equilibrium policy when the information environment closely corresponds to US GAAP in place at the time of their writing and during the time period used in this study (pre-FAS133 and pre-SEC risk disclosure requirements). Empirical evidence in support of DeMarzo and Duffie's (1995) theoretical model can be found in Dadalt et al. (2002) where analyst forecast accuracy and forecast dispersion are found to be supportive of the reduction in information asymmetry proposed by DeMarzo and Duffie (1995). In this study, I also examine analyst forecast accuracy, but in addition provide more direct evidence of analysts' use of earnings surprises following disclosure of the implementation of hedging activities. I also provide evidence of investors' reactions by examining the earnings response coefficients following disclosure of the implementation of hedging activities. Hedging reduces the risk in the earnings stream because potential sources of variability are eliminated. Prior work has shown that analysts' earnings forecast accuracy increases (decreases) as variance in the firm's earnings stream decreases (increases) (see Das et al., 1998). Prediction 1 (P1) follows from these two statements: P1. Disclosure of the implementation of hedging activities will lead to an increase in analysts' forecast accuracy.\n3. Sample selection: The predictions in this paper assume that firms that disclose hedging activities are, in fact, reducing firm risk. The findings of Guay (1999) and Nguyen and Faff (2002) suggest that this is usually the case[1]. Similar to earlier work of Guay (1999) and Dadalt et al. (2002), this study uses a sample of user and non-user firms to test the empirical predictions. 3.1 Identifying the \"user\" firms\n4. Empirical tests: 4.1 Test of earnings predictability\n5. Conclusion: Prior literature has focused primarily on the firm's decision to use derivatives or the firm's motivation for using derivatives. This study assumes that the derivatives choice was rationally made, in accordance with existing hedging theories and earlier empirical findings, and examines the risk management strategy from the investors' viewpoint. In order to isolate the impacts of hedging activities, this study uses a carefully chosen sample of non-user firms as a control for both macro-economic changes and for the derivatives choice factors identified in prior research. The results indicate that analysts and investors revise their beliefs about properties of the earnings stream when they learn of the firm's implementation of hedging activities. 
Implementation and communication of hedging activities to mitigate transitory shocks were found to be associated with increased earnings predictability, increased information in earnings surprises, and increased earnings response coefficients. Interestingly, these results were found in a time period preceding the more detailed risk strategy disclosure requirements of SEC Financial Reporting Release No. 48 or Statement of Financial Accounting Standard No. 133. While recent research has documented weaknesses in firms' disclosures under the SEC requirements (Roulstone, 1999), the results of this study suggest that financial statement users were able to detect an effect of hedging on earnings, even though the strategies did not have to be described in the level of detail required by SFAS 133[11]. Future research could revisit these tests for the time period following the more detailed disclosure requirements as a way of evaluating the usefulness of the additional disclosures.\n",
"role": "user"
},
{
"content": "Purpose: - This study aims to examine how market participants changed the way they process earnings information after learning of the implementation of hedging activities.\nDesign/methodology/approach: - Using a sample of derivative user and non-user firms, this study empirically compares earnings predictability, forecast revision behavior, and the earnings response coefficients before and after the disclosure of sustained hedging activity.\nFindings: - The findings indicate that analysts' forecast accuracy increased and that unexpected earnings were incorporated into subsequent earnings forecasts to a greater extent subsequent to disclosure of sustained hedging activity. Additionally, the findings indicate an increase in the earnings-return relation in the hedging activity period.\nResearch limitations/implications: - This evidence empirically supports the claim that, when a company communicates that hedging activities have been started, market participants are better able to forecast earnings and view subsequent earnings announcements as providing greater information about future earnings. The results may be understated due to the minimal disclosures required during the sample period. Future research could revisit these tests for the SFAS 133 time period as a way of evaluating the usefulness of more detailed disclosures.\nPractical implications: - The models used in the tests of forecast revisions and earnings response coefficients could easily be adapted to other settings where the research question compares different time periods.\nOriginality/value: - This study adds to the empirical evidence regarding the effects of hedging activity by providing direct evidence of analysts' use of and investors' reactions to earnings surprises following the disclosure of the implementation of hedging activities.\n",
"role": "assistant"
}
] |