Improving innovation and customer satisfaction through systems integration
[ "Integration characteristics", "Innovation", "Customer satisfaction", "ISO 9001", "ISO 14001, Spain" ]
[ { "content": "Summarize the following paper into structured abstract.\n\n1 Introduction: In a remarkably short time, economic globalisation has changed the world's economic order, bringing with it new challenges and opportunities. Innovation merits special attention nowadays, given its importance for firms in order to adapt to the global market and to provide customized solutions to consumers. Customers are becoming more sophisticated, segmented and demanding, and expect more in terms of customization, newness, quality and price (Stark, 2011). To adapt to these customers' needs, organizations need to provide quality standardised products that are innovative and environmentally friendly (Stark, 2011). In this context, the management of innovation, customer satisfaction and the implementation of quality and environmental standards in organizations are considered essential for organizations to compete in the markets. This research is an important step forward in finding out the impact of systems integration on innovation and customer satisfaction. This research aims to explore how the integration of management systems relates to the process of innovation and how innovative products or services will satisfy customer demand, needs and requirements. Systems integration can be considered as a type of organizational innovation (Llach et al., 2011); thus, there exists a strong relationship between these two constructs. Moreover, innovation increases customer satisfaction as it maximizes the value of their purchases. In order to satisfy their customers, companies need to be innovative and, to do this, they have to manage the different systems in the organization effectively. 
Therefore, firms should be committed to managing integrated management systems (IMSs), innovation, customer satisfaction and the existing interrelationships among them. In this context, the aim of this study is to propose scales to measure MSs integration characteristics, customer satisfaction and organization innovation, as well as to critically analyse whether there is any relationship among these constructs, in order to help organizations to better understand and manage these elements. This yields insight into the attributes of the integration characteristics that an organization should focus on to achieve the goal of customer satisfaction and organization innovation and hence increase organization performance.\n2 Literature review: 2.1 Innovation, customer satisfaction and management system standards\n3 Methodology: 3.1 Questionnaire and measures\n4 Data analysis and results: Given that outliers often have a dramatic effect on the fitted model, the univariate skewness and kurtosis were computed to test the normality of each variable used in the model. The results (see Table I for the items, which are referenced here only by their codes), ranging from -1.037 (B3) to 0.493 (I10) for skewness and from -1.107 (I9) to 2.286 (B3) for kurtosis, were within the maximum limits of an absolute value of two for skewness and seven for kurtosis, as recommended by Curran et al. (1996). Besides, it was clear that most variables were slightly off-centre.\n4.1 Assessment of the scales to measure the beneficial characteristics from integration and innovation\n5 Discussion: Before addressing the key contribution of the study, we first assess the credentials of the key dimensions to evaluate MSs standards integration characteristics and organization innovation and satisfaction. We used an EFA and a CFA to ascertain whether the scales have a factor structure that reflects the theoretical dimensionality of their setting. 
Thus, this study emphasises four sets of determining dimension factors for the integration characteristics, namely: \"better use of the systems\", \"system performance\", \"organizational strategic\" and \"internal cohesion\". Moreover, our analyses have shown fairly conclusively that the four-dimension configuration initially proposed for innovation in the organization does not fit the data set. The dimension of \"product innovation\" was discarded since all the items were loading very poorly. Hence, only three dimensions were retained: \"process innovation\", \"organization innovation\" and \"marketing innovation\". It was not a surprise that managers attached more weight to process innovation than to product innovation, given that the extant literature supports the notion that some process innovations can completely revolutionize the way a product is produced and it is a type of innovation that significantly reduces costs (Dess et al., 2006). Moreover, product innovation may remain a source of competitive advantage but only for a short period of time, since competitors often tend to imitate new and successful product innovations (Zhang et al., 2004). Organization and marketing innovation were also given more importance than product innovation by managers, as they are becoming the new sources of innovation and competitive advantage for companies (Qingyu et al., 2004). The results of the theoretical model show several significant effects from the integration characteristic \"better use of the systems\" on innovation and satisfaction. This would mean that the introduction of these systems, such as ISO 9001 and ISO 14001, could itself be considered an innovation, therefore contributing to a positive impact between these variables, as suggested by Camison et al. (2007) and Llach et al. (2011). Results also show a positive effect from the \"system performance\" and \"organizational strategic\" groups on \"organizational innovation\". 
In this case, more efficiency and a clear strategy when managing the systems would result in greater innovation regarding the organizational methods used in the companies. In this sense, strategic factors in the organizations, such as systems integration, have been found to condition organizational innovation (Helfat and Raubitschek, 2000). The positive effects from \"internal cohesion\" on \"organizational\" and \"marketing innovation\" show the effects of softer aspects of the organization (relationships among employees, communication between departments) on softer kinds of innovation (organizational and marketing), whereas the \"internal cohesion characteristics\" do not show a significant effect on \"process innovation\". The influence of aspects such as internal communication or internal cohesion among employees and departments in an organization has been studied and related to softer types of innovation, such as organizational and marketing, because they foster new innovative ideas in the organization and can become a driver for integration (Welch and Jackson, 2007). The following integration aspects, namely \"better use of the systems\", \"system performance\" and \"internal cohesion\", were found to have a significant impact on \"satisfaction\". These effects would mean that organizations having higher efficiency and ability to use the systems, as well as more motivated employees and more communication between departments, would achieve greater customer satisfaction. These results are consistent with the findings of Simon et al. (2012) and Bernardo et al. (2012), who found that the most important aspects of IMS were related to employees' motivation and relationships. Finally, we find a positive effect between \"marketing innovation\" and \"satisfaction\", meaning that customers would be more satisfied if they perceived the new ways of presenting and selling products. 
This finding is coherent with the literature on innovation's impact on customers because firms are more likely to prioritize customer demands during marketing innovation than during product innovation. During product innovation, firms fear the cannibalization of existing products (Tushman and Anderson, 1986) and that customers will not receive the direct benefits of this innovation, whereas they believe that marketing innovation is directly perceived by customers (Gordon, 2006).\n6 Conclusions: This study, which aims to shed light on the relationships between integration characteristics, customer satisfaction and innovation, contributes to narrowing the gap in the field of MSs integration by providing insights into how the management of certain aspects of IMS can enhance innovation and customer satisfaction, thus increasing firms' performance. The objectives of this study were threefold. First, we aimed at studying the existing relationships between integration and customer satisfaction. The second objective was to analyse the possible relationship between integration and the level of innovation of organizations. Finally, our H3 tested the relationship between innovation and customer satisfaction. In order to explore these effects, a proposed model was presented and tested by means of SEM. The first conclusion to be drawn from the study is that the beneficial characteristics of integration can be grouped into four large clusters, which are \"better use of the systems\", \"system performance\", \"organizational strategic\" and \"internal cohesion\". This last group of characteristics is the one that receives the most comments in the literature. For example, the attitude and motivation of people is mentioned in Zeng et al. (2007). 
Therefore, we can conclude that the attitude and motivation of people when working with the MSs play a vital role during the process of integration of these systems within the organizations studied. We have found three innovation constructs, namely \"process innovation\", \"organization innovation\" and \"marketing innovation\". This classification is similar to the one proposed by the OECD (2005) in the Oslo Manual, which includes these three types of innovation but adds product innovation, which we do not find in our study. Finally, the clustering pertaining to customer satisfaction has one construct, \"satisfaction\", which is composed of four variables based on the ECSI model (O'Loughlin and Coenders, 2002). As we have been able to find very few studies showing the relationships between the integration beneficial characteristics, the innovation level and customer satisfaction (Nemati et al., 2010), we used cluster results to explore these effects by means of SEM. Four integration characteristics, three innovation dimensions and one customer satisfaction dimension, as well as the relationships among them, were proposed. Overall, results show all the hypotheses were either totally or partially supported. We found a positive impact from integration characteristics on innovation. In our model, all the dimensions of integration characteristics had an impact on \"organization and marketing innovation\". Specifically, \"better use of the systems\" is the strongest predictor for both \"organization innovation\" and \"marketing innovation\", followed by \"internal cohesion\", \"organizational strategic\" and \"system performance\". This shows that a better understanding and use of the systems leads organizations to increase their innovation levels. 
Therefore, both internal and strategic factors that condition organizational innovation (such as the management of systems) need to be taken into account by managers, who should strengthen different strategic factors and internal capabilities to achieve an adequate level of organizational innovation and thus improve performance and encourage innovation (Helfat and Raubitschek, 2000). A significant positive effect of integration characteristics on satisfaction was found. Specifically, all three dimensions of integration characteristics (system performance, internal cohesion and better use of the systems) were significantly related to satisfaction, demonstrating a strong relationship between the integration of MSs and their associated characteristics and the satisfaction of the firms' customers. As one of the main components of MSs is customer satisfaction, companies that implement them need to be innovative in developing and launching new products or services to match the customers' needs and requirements. It is clear from our study that MSs implementation and integration produce positive results for customers, increasing levels of customer satisfaction (Heim and Sinha, 2001; Nagar and Rajan, 2005). As MSs particularly stress the importance of customer focus, it is important that companies manage these systems properly in order to benefit from their customer perspective and become more customer oriented. Stressing the need for using MSs to enhance customer focus can help businesses shift their focus toward customer satisfaction (Mehra et al., 2001). The influence of innovation on customer satisfaction was partially supported, as the double interaction of \"process innovation\" and \"organization innovation\" on \"satisfaction\" was not found to have a positive effect. Instead, only \"marketing innovation\" had a significant effect on \"customer satisfaction\". 
These are not surprising effects taking into account the results of Pan and Zinkhan (2006), who found that a company's willingness to innovate did not necessarily turn into positive customer satisfaction results. The existence of a link between marketing innovation and customer satisfaction, rather than between other types of innovation and satisfaction, is supported by several authors who believe that marketing innovation is easily perceived by customers, therefore making firms more willing to invest in it (Gordon, 2006). While such a focus upon customer satisfaction through building more direct relationships may enhance the marketing innovation efforts of a company, it is quite possible that there are unintended consequences associated with a heightened concentration upon this type of innovation, such as a firm's decreased willingness to invest in product, process or organizational innovation (Chandy and Tellis, 1998). The findings of the study have several practical implications for managers of firms with more than one implemented MS. As more and more systems are introduced in organizations and their management becomes more complex, many organizations are simplifying their MSs. Therefore, the integration of MSs helps organizations manage their MSs as well as incorporate innovation and customer satisfaction as part of their systems. Shaping strong positive relations between an organization's MSs and the innovation process and customer satisfaction can be a source of essential knowledge for improving management processes, customer processes, and products and services. It is apparent from our results that integration is of the greatest importance in generating customer satisfaction and innovation. 
This shows that managers must ensure that they manage their quality and environmental MSs by providing high-quality products which are environmentally sustainable at all times if they wish to enhance customer satisfaction and innovation in the firm. In particular, the study has revealed that \"better use of the systems\" is the most critical predictor of process, organization and marketing innovation. Managers should therefore place greater emphasis on managing the IMS efficiently by improving understanding and use of the systems and by providing better options to include new systems, as suggested by Simon et al. (2012). Second, although previous research has suggested that organizational strategic and system performance issues are the most important dimensions for predicting innovation (Helfat and Raubitschek, 2000), the present study found that \"internal cohesion\" was the most important factor in fostering innovation. This suggests that managers of IMS should ensure that the climate and communication among employees are dealt with properly in order to ensure that the organization's objectives are aligned with those of the employees. Managing these internal aspects in the organization can make employees aware of their role in the process of continual improvement of formalized product quality, innovation and customer service, leading the organization to total quality management and business excellence. Finally, customer satisfaction has become a vital concern for companies and organizations in their efforts to improve product and service quality and maintain customer loyalty within a highly competitive marketplace. 
Because we find that only marketing innovation relates to customer satisfaction, managers should adopt innovative marketing strategies to enhance customer satisfaction, especially with regard to optimizing perceived product and service quality to meet and exceed customers' expectations. Although the findings provide meaningful implications for organizations implementing MSSs or MSSs integration, there are several limitations which should be addressed in the future. The main limitation of our study is the sample size, 76 organizations, which may explain why some of the hypothesized relationships are not significant. Furthermore, the sample on which this study is based was drawn from a single country, Spain, which may limit the generalization of the results. Finally, the responses analysed only reflect the points of view of the company managers and not of other involved actors such as the auditors, the employees or the customers. Had this been the case, the richness of the data gathered would have been higher and, therefore, the conclusions drawn from the study would have been more representative of the reality of these organizations. Given the answers regarding the beneficial integration aspects experienced by organizations, their innovation level and customer satisfaction, future research could focus on identifying the relationship between these variables and financial performance measures. It would also be interesting to study how the perception of firms regarding these interactions evolves over time.\n", "role": "user" }, { "content": "Purpose: - In recent years, organizations have been forced to compete in a new environment and to become more innovative, provide more quality and respond more effectively to consumers' needs and preferences. 
Within this context, the main objectives of this research are to propose scales and study the existing relationships among innovation, Management System Standards (MSSs) integration and customer satisfaction in order to help organizations to manage these elements and increase their performance.\nDesign/methodology/approach: - Data for this study derive from a survey carried out in 76 Spanish organizations registered to at least ISO 9001:2008 and ISO 14001:2004. An Exploratory Factor Analysis (EFA) and Structural Equation Modelling (SEM) are utilised to assess and confirm the proposed scales' validity and the relationships of the research model.\nFindings: - The conceptual model finds significant support based on the empirical study. Three of the four dimensions of innovation and the four dimensions of MSs integration are confirmed. Besides, the findings show that the integration characteristics are positively related to innovation and satisfaction, whereas innovation is only partially linked to satisfaction.\nOriginality/value: - This study, which aims to shed light on the integration characteristics, is the first to propose a model that links the three concepts of integration, customer satisfaction and innovation together. These are relevant issues for the competitiveness of companies willing to increase their performance, especially for companies which have implemented several ISO-based systems, which have become a key part of the organisation's lifeline and a prerequisite for survival in the twenty-first century.\n", "role": "assistant" } ]
Appreciating formal and informal knowledge transfer practices within creative festival organizations
[ "Appreciative inquiry", "Ethnography", "Social practice theory", "Creative festival organizations", "Knowledge transfer practices", "Not-for-profit organizations" ]
[ { "content": "Summarize the following paper into structured abstract.\n\nIntroduction: Not-for-profit creative festivals face particular challenges with regard to practising knowledge transfer within a dynamic organizational context and short time frames for the production of cultural events (Abfalter et al., 2012; Ragsdell et al., 2014; Stadler et al., 2014). While many not-for-profit organizations experience negative impacts from time pressures and high staff turnover, little attention has been paid to how festival organizations successfully create cultures where knowledge transfer and collaboration occur within the high intensity of a festival life cycle. In contrast to the problem-solving and psychological approaches to research that have dominated the event management field, this article draws upon appreciative inquiry (AI) to understand how knowledge transfer \"works\" between organizational actors as formal and informal practices that produce a collaborative festival culture. Our purpose in this article is to identify how organizational \"know-how\" was enacted through the interrelationships between formal and informal transfer practices within a case study of the Queensland Music Festival (QMF). We examine how the knowledge and practice know-how that enables collaboration is transferred not only in formal settings such as meetings but also through informal contexts that can be conceptualized in terms of organizational practices where work is performed, ordered and interpreted (such as lunchroom conversations). We suggest that formal and informal contexts of knowledge transfer are best understood within the dynamics of organizational culture rather than as an individualized exchange of information between employees. As Erden et al. (2014, p. 
712) explain, \"Social practices are situationally constituted, recurrent, materially bounded, and shared forms of social activities that produce and structure life in organizations\".\nKnowledge transfer practices in creative festival organizations: Not-for-profit festival organizations experience unique challenges with regard to knowledge transfer practices. Knowledge transfer is often used interchangeably with knowledge sharing; however, knowledge transfer implies a more complex understanding of how actors interpret and apply what they have learnt from others; it also involves the \"acquisition and application of knowledge by the recipient\" (Wang and Noe, 2010, p. 117). We argue for a greater conceptual emphasis on the organizational context that shapes \"how\" knowledge transfer is practised rather than a more technical, psychological or functionalist view of information exchange. In this way, we draw upon the insights of social practice theory (Feldman and Orlikowski, 2011; Reckwitz, 2002) that locates the construction of meaning in the performance of organizational tasks and relationships where actors are connected through tacit and explicit meanings, competences and materials. We propose a practice-based understanding of knowledge transfer, where practices:\nAppreciative Inquiry: Festivals create celebratory experiences of culture and community through the performance of traditions and new practices of engagement and creativity. AI is an approach (both a specific methodology and broader social constructionist epistemology) within organizational studies and management that identifies how stories of success and achievement can contribute to understanding how culture and identity are co-constructed by organizational actors. By identifying stories about positive experiences, AI aims to identify what is already working well in an organization and, at the same time, challenges and problems are framed as opportunities for learning (Cooperrider and Whitney, 1999). 
AI is based upon social constructionist underpinnings that Cooperrider and Whitney (1999) identify in terms of five principles:\nMethodology: An ethnographic case study with the Queensland Music Festival (QMF) in Australia was conducted from February-August 2011. The QMF is a 17-day-long biennial music festival that is centrally organized in the city of Brisbane and performed through numerous events held across the city and more distant, regional communities in Queensland. With multiple sponsors and other sources of funding, the festival works through community cultural development principles to realize a vision of \"transforming lives through unforgettable musical experiences\". QMF staff members are required to work with different communities over an extended period of time and encourage their participation in performing at a local arts event. The festival is managed by a permanent paid staff of 7 people and supported by another 35 production, administrative and marketing professionals, as well as over 2,000 international, national and community-based artists during each festival season (QMF, 2011). It thus represents an organizational structure typical of large festivals, and faces the challenge of bringing festival members with various backgrounds together for a concentrated period of time. The research was approved by the QMF executive director, QMF board of directors and Griffith University Human Ethics Committee.\nFindings: With regard to the question of how knowledge transfer was practised within QMF, we found that staff members engaged in both formal and informal work practices that enabled effective collaboration which, in turn, was crucial to the success of the festival. Staff members, however, mainly took these practices for granted as a tacit part of their day-to-day work and did not explicitly identify them through the language of \"knowledge transfer\". 
Yet, there was an overwhelming range of stories that emphasized the collaborative \"know how\" that staff brought with them from other festival work or developed through their involvement in QMF over time (sometimes several seasons). We outline the formal practices through which staff developed a shared festival knowledge (understanding their own role in relation to the greater whole), such as how central staff team meetings were managed to facilitate the performance of specific and collective identities (projects, actors and festival vision) over time. Organizations typically create information-sharing rituals through meetings (Smith and Stewart, 2011); however, they are not necessarily effective ways to formalize knowledge, make tacit knowledge explicit or generate new knowledge through collaboration. Staff meetings are organizational practices that can elicit frustration, boredom and power-knowledge struggles over the representation of key issues, resourcing, strategy and interpersonal differences. In the not-for-profit sector, in particular, organizations frequently face funding and time pressures as well as high turnover of voluntary staff and hence knowledge fragmentation that can exacerbate problems related to effective knowledge sharing (Lettieri et al., 2004; Hume et al., 2012; Ragsdell et al., 2014). Through the research, we also identified the significance of informal practices that facilitated the interpretation of information and the performance of trust relationships that underpinned knowledge transfer. For example, the practice of individuals from different teams having lunch together allowed differences to be identified and resolved while generating mutual support through pleasurable shared activity (eating). 
In a time-pressured context, the informal opportunities for interaction, relationship building and sharing of past/present experiences were importantly interrelated with the formal organizational practices in fostering understanding of \"how things are done within QMF\". The management of the festival meetings, strategy and teams facilitated a culture of collaboration that effectively mobilized organizational formal and informal practices - rather than relying upon either individual agency (actor responsibility for jobs) or organizational structure (managing through hierarchies of expertise). In the following sections, we explore how organizational actors and the ethnographer transferred knowledge between formal settings and also informal contexts. Recognizing the role of informal practices enables a deeper appreciation of how knowledge sharing is performed and embodied as a collaborative practice that requires the ongoing interpretation and discussion of ideas that are refined, accepted or rejected outside of the formal context (Lahti et al., 2002; Allen et al., 2007). Yet, the importance of these informal practices is oftentimes overlooked in the high-pressure work environments of not-for-profit organizations, despite perennial issues such as stress and burnout. The findings reveal how an AI approach allowed us to identify important meaning-making aspects of such informal practices.\nSummary of findings: In contrast to a conventional problem-solving approach within knowledge management that seeks to diagnose problems and offer external solutions, our research focused on appreciating the cultural context that shaped the formal and informal knowledge transfer practices that supported collaboration. In this way, we contribute to furthering understanding of how festivals as not-for-profit organizations actively support (or impede) the development of collaborative relationships within time-pressured and resource-limited contexts. 
While many not-for-profit organizations experience negative impacts from such pressures, the QMF is one example of how a festival organization has developed knowledge transfer practices that facilitate staff engagement, trust, care and collaboration. We identified a range of formal and informal organizational practices that created a means to connect diverse actors at the micro level of work, while also generating a macro-level, shared identification with a festival vision that linked the diverse programme of events. While organizational conflicts and communication problems were evident, the use of an AI framework enabled a more nuanced focus on how festival staff in the central and project teams negotiated differences in expertise and skill to generate opportunities for learning and reflexive thinking. AI pays attention to what is already working in organizational cultures and how organizational actors construct meaning through their interactions and work relationships; hence, we argue that using an AI approach to festival research also reflects an ethos of creating knowledge through democratic processes.\nLimitations: While we have identified key knowledge transfer practices that have contributed to the success of the QMF, we also note with caution the danger of simply applying such insights to all not-for-profit organizations. Thatchenkery and Chowdhry (2007, p. 154) describe the necessity of developing a context-specific understanding of the relationship between organizational cultures and knowledge transfer practices:\nImplications for practitioners and researchers: The not-for-profit sector brings numerous challenges for festival organizations and there is a need to appreciate how collaborative and creative knowledge transfer works to dynamically shape organizational life and occurs through both formal and informal practices. 
Hence, we theorise our findings in relation to three key principles informed by social practice theory:\n", "role": "user" }, { "content": "Purpose: Problem-solving approaches to research have dominated the not-for-profit festival management field. Little attention has been paid to how festival organizations successfully create cultures where knowledge transfer is practised within the high intensity of a festival life cycle. Drawing upon insights from social practice theory and appreciative inquiry (AI), the purpose of this paper is to offer a different conceptual approach to understanding how knowledge transfer \"works\" as an organizational practice to produce a collaborative festival culture.\nDesign/methodology/approach: This paper draws upon an ethnographic case study with the highly acclaimed Queensland Music Festival organization in Australia. The research questions and methods were framed around an appreciative approach that identified formal and informal practices that supported collaboration.\nFindings: This research focused on appreciating the cultural context that shaped the interrelationships between formal and informal knowledge transfer practices that enabled trust and collaboration. A range of knowledge transfer practices was identified that contributed to the creation of a shared festival ethos and the on-going sustainability of the festival vision.\nPractical implications: The not-for-profit sector brings numerous challenges for festival organizations, and there is a need to appreciate how collaborative and creative knowledge transfer can occur formally and informally. Festival organizers can benefit from understanding the relational and practice dimensions of knowledge management as they are performed within specific organizational contexts.\nOriginality/value: An appreciative understanding of knowledge transfer practices has not yet been applied to not-for-profit festival organizations, where problem-solving approaches dominate the field.\n", "role": "assistant" } ]
The influence of intellectual capital on export performance
[ "Intellectual capital", "Exports", "Competitiveness", "Human capital", "Competitive advantage", "Bosnia and Herzegovina" ]
[ { "content": "Summarize the following paper into structured abstract.\n\n1. Introduction: For over two centuries economists have been trying to find ways to better manage scarce resources. The traditional economic model is based on the assumption of scarcity, in which supply and demand determine market price. However, knowledge as a resource is not based on the principle of scarcity. The higher the supply of knowledge, the greater its value. As an illustration, at the beginning of the twentieth century, the total amount of information in the world doubled every 30 years. In the 1970s, that number was reduced to seven years. Today, this time is reduced to less than 18 months.What is even more important is that the market has recognized the value of knowledge and other intangible factors long ago. That is why companies are often worth more than their book value. Handy (1989) cited the results of his research which found that the value of intellectual capital is usually three or four times greater than the book value of the company. Also, according to research conducted by Baruch Lev (2001), intangible assets i.e. intellectual capital now represent between 60 and 75 percent of the real value of the firm.This means that effective and efficient management of intellectual capital is becoming the only way to create a competitive advantage. The empirical evidence suggests that this approach can be a basis for accelerated economic growth and development. This approach implies that the competitiveness of individual companies and entire economies is not conditioned by the material resources at their disposal - natural resources, physical and financial capital. Instead, it is conditioned by the quality of human and intellectual capital - quality and quantity of knowledge, innovation, creativity, etc.Another key factor necessary to accelerate the economic development of a country is growth of exports, particularly in knowledge and technology intensive industries. 
Analyzing the many theoretical and empirical findings in this area, which represent the basis for this research, some general global trends should be noted:* Increasing importance of exports and investment. The value of world exports since 1980 has increased several times, and the countries with the largest increases in exports have achieved the highest rates of GDP growth.* Growth in the global mobility of factors of production, which represents a potential increase in the similarity of production capacity around the world. Moreover, rapid transport and communication increasingly diminish the role of location as a factor of competitiveness.* Technology is becoming a key factor in development, and mass production based on cheap labor, raw materials and energy is losing its significance.* Industries based on knowledge are becoming increasingly important as a result of technological development. Knowledge, skills and human resources are becoming key elements of competitiveness.Most findings suggest that the twenty-first century will be a period of global competition in which national borders will play a minimal role in terms of availability of products and services. Economic and technological globalization that we have already witnessed has forced countries to stimulate the creation of internationally competitive products and enterprises, and to create an investment climate that will attract internationally competitive businesses and industries. This is now becoming a key factor of economic growth and development of countries, regions and cities. However, what is particularly important is that the focus of export competitiveness in the global economy shifts from low-cost labor and natural resources to technological content, quality, design, etc. The most important flows of international trade are dominated by products and services with a higher share of knowledge, i.e. 
intellectual capital in their creation.The purpose of this research was to analyze the impact of intellectual capital on export performance, i.e. international competitiveness, and to evaluate the possibilities of creating intellectual capital-based industrial policies. The structure of this paper is as follows. After the introduction, the second part of the paper elaborates the theoretical basis for this research. This is followed by the research objective and hypotheses in the third part and the methodology and data in the fourth. Results and discussion of results are shown in the fifth section and there are some concluding remarks at the end.\n2. Theoretical basis: The relation between the human factor and export performance has received widespread attention in the economic literature. Many studies have explored this relationship, among others Gomez-Mejia (1988), Cavusgil and Zou (1994), Levin and Raut (1997), Zou and Stan (1998), Grasjo (2005), Contractor and Mudambi (2008), Andersson and Johanson (2009), etc. These studies use aggregate data for human capital such as average years of schooling, the number of highly educated population, investment in human capital, etc. One of the most important studies in this field, the research by Levin and Raut, which takes into account data from 45 developing countries, found that there are significant complementarities between export performance and human capital. Similar results were obtained in research conducted by Contractor and Mudambi (2008) on a sample of 25 countries. 
Also, research carried out by Grasjo (2005) and Andersson and Johanson (2009) uses data on human capital and export performance at the regional rather than the country level and suggests that the higher the human capital of a region, the higher the level of exports, and the average price of exported goods (as an indicator of the structure of exports) is also higher.However, the relation between intellectual capital and export performance is still insufficiently explored. According to Bontis (1998), the term \"intellectual capital (IC)\" was first introduced by John Kenneth Galbraith, but the theory of intellectual capital (IC) began to gain wider international attention during the 1990s. The difference between human capital and intellectual capital lies in the fact that intellectual capital is not just knowledge and skills that can be acquired by learning and training. It represents an active transformation of knowledge into a new value, value-added products or services. The study of intellectual capital means researching intangible assets. Therefore, the key problem in this area is its measurement. Unfortunately, the fact that it is intangible, regardless of the simplicity of the concept, becomes a problem for researchers when it is necessary to measure it, because measurement is difficult and expensive. This is probably the other reason why the relation between intellectual capital and export performance is still insufficiently explored.The only two studies on the relation between intellectual capital and export performance that were found during the literature review for this research were Moslehi et al. (2006) and Zerenler and Gozlu (2008). In this regard, one of the research studies most similar to this research is that by Zerenler and Gozlu (2008). Their research used a relatively complicated and expensive survey method for measuring intellectual capital. 
The intellectual capital of the cluster supplying the auto industry of Turkey (companies in the fields of metal industry, electrical industries, production of plastic, etc.) was measured by the Direct Intellectual Capital method, based on questionnaires, and Zerenler and Gozlu examined its impact on export performance. The findings of their study clearly confirm that the export performance of enterprises is under the powerful influence of intellectual capital, especially in high-tech sectors.Fortunately, there are some methodologies that measure IC in an equally effective, but simpler and less expensive way. One that is often used is the Value Added Intellectual Coefficient methodology. The Value Added Intellectual Coefficient (\"VAIC\"), developed by Ante Pulic, is a method for measuring the performance of a company (Pulic, 2000a). In particular, VAIC measures the total value creation efficiency in the company. The better the utilization of a company's resources, the higher the value creation efficiency of the company. Pulic (2000a) was the first to test his methodology, on a sample of 250 Financial Times Stock Exchange companies and Vienna Stock Exchange companies. According to his results, there is a close relationship between the value creation efficiency of the resources, i.e. VAIC, and the market value of companies.Although there is no research on the relation between intellectual capital, measured by VAIC, and export performance, there are many studies that explore the impact of VAIC on different kinds of company performance. Some of them will be mentioned here. Chen et al. (2005) found that intellectual capital, measured by VAIC, shows a significant positive influence on market returns, and also on current and future financial performance of Taiwanese firms. Tan et al. (2007) confirmed these results on a sample of companies located in Singapore. 
Mavridis and Kyrmizoglou (2005) used the VAIC method to measure the performance of Greek listed firms under its intellectual (IC) and physical (CA) aspects. Their study confirms the existence of some added value-based performance differences and states \"that 'localized firms' are the distinct small technocratic, blue-collar intellectual performers while the 'globalized' ones are the large plutocratic, white-collar intellectual performers\".The research of 80 Taiwan technological companies (Shiu, 2006) showed that VAIC had a positive impact on profitability (ROA) and market value, and a negative impact on productivity. Kujansivu (2006) found that companies in the forest and chemical industries were the least efficient, while companies in the electricity, gas and water supply sector were the most efficient in utilizing their IC.Kamath (2008) calculated VAIC using a sample of 25 firms in the pharmaceutical industry of India, for the period from 1996 to 2006. He finds that the VAIC results show that Indian pharmaceutical companies do well and efficiently utilize their IC. Basso et al. (2009) from Mackenzie Presbyterian University showed that, based on similar models, this influence was strong in technologically more advanced sectors in Brazil (manufacture of machinery and equipment, manufacturing of cars, furniture production), while for the less technologically developed sectors this effect is negligible or nonexistent (textiles, clothing, leather and leather goods, etc.). Phusavat et al. (2011) researched the influence of the value added intellectual coefficient (VAIC) on the performance of manufacturing firms. They find that IC has a significant influence on return on equity, return on assets, revenue growth, and employee productivity.\n3. 
Research objective and hypotheses: The objective of this research is, on the one hand, scientific - to empirically analyze the influence of intellectual capital on export performance and explain the causal dependencies in the light of relevant previous research - and, on the other hand, pragmatic - to establish the benefits for policy making that may result from the research results.In this research, only the intellectual capital indicator (VAIC) and the human capital indicator (HCE) are chosen as independent variables. A structural capital indicator (SCE) and an indicator of capital employed (CEE) were not chosen although they were calculated as components of VAIC. The reason for this choice is the fact that most studies in this area (Gomez-Mejia, 1988, Cavusgil and Zou, 1994, Levin and Raut, 1997, etc.) used human capital indicators such as average years of schooling, the number of highly educated population, investment in human capital, etc. The choice of only the human capital indicator (HCE) among the VAIC components was an attempt to make this research more compatible with the previously mentioned studies.The first two hypotheses explore the relation between the level of the intellectual and human capital coefficients and export growth. In essence, the intention here is to see how the past development of intellectual and human capital, which has resulted in the achieved level of the coefficients, influences the growth of exports. It must be emphasized here that growth of exports is chosen because the level of exports was favoring larger firms:H1a. There is a positive relation between the level of the Value Added Intellectual Coefficient (VAIC) and export growth of firms and sectors.H1b. There is a positive relation between the level of the Human Capital Efficiency (HCE) coefficient and export growth of firms and sectors.The other two hypotheses are based on the relation between the growth of the intellectual and human capital coefficients and export growth. 
In essence, the intention here is to see the influence of the current growth of intellectual and human capital on the growth of exports:H2a. There is a positive relation between the growth of the Value Added Intellectual Coefficient (VAIC) and export growth of firms and sectors.H2b. There is a positive relation between the growth of the Human Capital Efficiency (HCE) coefficient and export growth of firms and sectors.\n4. Methodology and data: 4.1 Methodology\n5. Results and discussion: 5.1 Descriptive statistics\nConclusions: The most important contribution of this research is primarily its novel approach for measuring the impact of the coefficient of use of intellectual capital in creating additional value (VAIC) and its components on the export performance of firms, sectors and countries. The VAIC methodology was previously used to examine the effect of intellectual capital in general and in relation to the financial performance of companies, sectors and countries. With this in mind, this research provides a scientific contribution by opening up a new possibility of using the VAIC methodology, namely in the area of export competitiveness.There is also a significant resemblance between the results obtained and the results of the Delegation of the European Union in B&H, from the project \"Analysis and Mapping the Value Chains in Bosnia and Herzegovina\", where EU experts in their one-year project had the task of identifying the economic sectors in B&H that represent its competitive advantages. The EU experts used the well-known and in practice proven methodology created by Michael Porter. 
It might be of relevance that this research in B&H, based on intellectual capital methodology, produced similar results and took significantly less time and money.Implications for researchers and practitioners\n", "role": "user" }, { "content": "Purpose: - The purpose is to analyze the impact of intellectual capital (IC) on export performance of firms and industries.\nDesign/methodology/approach: - This research used value added intellectual coefficient (VAIC) to measure intellectual capital as an independent variable. An export performance, as dependent variable, was measured as growth of exports. The sample consisted of 134 firms in Bosnia and Herzegovina (B&H). Empirical analysis was done by linear regression analysis.\nFindings: - The results of regression analysis show a significant (p<0.01), positive influence of the value added intellectual coefficient and its components on the export growth in the sector of food and beverages and manufacturing of furniture and wood products in B&H. For other sectors there is no significant relation of independent and dependent variable.\nPractical implications: - The results correspond with the results of the EU project that determined competitive advantages of B&H by Michael Porter's methodology. Results of this research raise the possibility of further testing of the author's methodology, called the measurement of intellectual capital in export performance (MICEP) methodology, in determining the competitive advantages, because it took considerably less time and money than EU project methodology. Also, a strong influence of IC on the export performance of sectors with competitive advantages opens the way for industrial policies based on intellectual capital, not only in B&H, but in other countries.\nOriginality/value: - This is the first research that has measured the impact of intellectual capital on export performance by using the VAIC methodology.\n", "role": "assistant" } ]
What happens if a business school disappears? The intellectual foundations of BSIS
[ "Stakeholders", "Social responsibility", "Communication", "Business schools", "Business administration" ]
[ { "content": "Summarize the following paper into structured abstract.\n\n1. Introduction: What would happen to the region around a business school if it were to disappear? In the current context of crisis for business schools, this question is no longer only a conceptual one. With business schools faced with growing legitimacy challenges, it is becoming a very relevant question (Cornuel, 2007; Thomas, 2007; Thomas et al., 2013). Indeed, some business schools have become increasingly weak and have, for instance in France, found their salvation through mergers and absorptions. Concomitantly, business schools have also had to face many crisis-related criticisms in which the viability of the dominant \"research-based\" business model has been questioned (Pfeffer and Fong, 2002; Lorange, 2010; Morsing and Sauquet, 2011). For those who think that this question is too provocative and not realistic enough, perhaps because they believe all business schools will survive (in spite of the failure of some business schools), let us reformulate our initial question: \"what would happen if - or what would be the consequences if - the size of your business school was reduced by 30% or if it was increased by 30%?\"\n2. Conceptual foundations: The managerial issue of impact is also a conceptual one. In line with classic grounded theory (Walsh, 2015), we conducted a literature review during and after an exploratory field study. However, the conceptual foundations are presented ex ante in order to facilitate understanding for the reader (Suddaby, 2006).\n3. Methodological issues: The methodologies of the study, of the criteria evaluation and of the BSIS process are discussed consecutively.\n4. The BSIS dimensions and areas: Our analysis of the interview data highlights a gap between what is thought to be a worthwhile impact and the existing assessment tools in a business school. 
On the one hand, the interviews with the key players in business schools usually cover a very large set of different impacts (e.g. business creation, support to companies, contribution to the communication of the image of the region, contribution to the employability of the students and to their entry into the job market, etc.). On the other hand, the existing assessment tools used by business schools mainly focus on one issue, either the financial one (e.g. counting money) or the research one (e.g. counting publications). In this context, BSIS was designed with a large spectrum of impact categories so as to offer an accurate assessment of a business school's overall impact.\n5. Conclusion: Such a data collection process is a cost for a business school that is both direct and indirect and the question of the return on investment arises. Based on the experience of 20 business schools (24 campuses) that have been through the BSIS process, we can identify five main benefits. The first three are internal and not always expected; the last two are expected because they correspond to the purpose of BSIS.\n", "role": "user" }, { "content": "Purpose: The FNEGE and then EFMD have been interested in defining a methodology that would make it possible to highlight the impact of a business school on its home territory. This has led to the development of the \"Business School Impact System\" (BSIS), which is introduced in this paper. The paper aims to discuss these issues.\nDesign/methodology/approach: The paper is structured as follows. The authors first present the methodological issues of the work that led us to develop the \"BSIS\" (EFMD, 2014). 
In a second section, the authors present BSIS in more detail.\nFindings: Based on the exploratory interviews and the literature review, the authors could first identify three main categories of impact: the financial impact; the impact on the regional community; and the impact on attractiveness and image.\nOriginality/value: The first benefit of BSIS is linked to the creation of an information system on impact. Second, the BSIS process increases the awareness inside the business school regarding the significance of this issue. Third, the authors observe that for numerous members of the business school the question of the impact of their activity gives them a better image of their job, of what they are doing and why they are doing it. Fourth, the BSIS report constitutes a powerful tool for communication with the stakeholders. The final benefit lies in the reviewers' recommendations on how to improve the impact of the business school.\n", "role": "assistant" } ]
Contemporary leadership in healthcare organizations: Fragmented or concurrent leadership
[ "Leadership", "Managers", "Health services", "Sweden" ]
[ { "content": "Summarize the following paper into structured abstract.\n\n1 Introduction: The present-day healthcare system is an organization that has undergone great changes and is still under pressure to increase quality and access, while at the same time accomplishing goals of cost efficiency. This blend creates special conditions for practising leadership. While leadership conditions of the line-organization are related to clearly defined tasks and areas of responsibility, leadership conditions within a process-oriented organization are more related to close-to-practice issues, shared responsibility, and boundary-spanning. Therefore, contemporary healthcare leadership seems to be shaped by certain complexities, which might create both opportunities and dilemmas. However, there are few studies addressing how healthcare leaders themselves experience and discourse on central, contemporary challenges and how they should be supported in healthcare organizations (HCOs). This study focuses on how the main contemporary challenges for healthcare leaders are experienced and discussed, and how these challenges can be supported.Healthcare organizational changes\n2 Method: 2.1 Design\n3 Results: The story line\n4 Discussion: 4.1 Support in fragmented or concurrent strategies for meeting everyday dilemmas\nConclusion: In this paper, we have proposed two leadership models for defining and differentiating ways of meeting different logics and demands made on leaders in the healthcare sector. The first model is leadership by separating different logics and fragmentation of time. The second model is leadership by integrating different logics and currentness of solutions. In practice, the two models co-exist but currentness of solutions - as a strategy in meeting different logics - is emphasized and has made fragmentation of time less evident. The leaders' strategies for meeting different logics are influenced by the discourse about change, leadership, and performance. 
Discourse and strategic communication provide the framework for leadership-employeeship interaction and, therefore, play an important role for contemporary leadership in the healthcare sector.\n", "role": "user" }, { "content": "Purpose: - The purpose of this paper is to gain a deeper understanding of the main contemporary challenges for healthcare leaders in their everyday work practice, and the support they need to master their experienced dilemmas.\nDesign/methodology/approach: - Qualitative in-depth interviews (n=52), and focus-group interviews (n=6) with 31 first-line and 45 second-line healthcare leaders are analysed in line with constructivist grounded theory.\nFindings: - In this paper, two leadership models are proposed for defining and differentiating ways of meeting different logics and demands made on leaders in the healthcare sector. The first model is leadership by separating different logics and fragmentation of time. Here, leaders express a desire for support in defining, structuring, dividing, and allocating tasks. The second model is leadership by integrating different logics and currentness of solutions. In this case, leaders want support in strengthening proactive leadership and shaping the basis for participative employeeship.\nResearch limitations/implications: - This research is designed to describe what people experience rather than to assess the frequency of that experience in the studied settings. However, it would be interesting to elaborate on the findings of this study using other research methodologies.\nPractical implications: - The findings contribute to contextual knowledge that is of relevance in supporting healthcare leaders. 
This is helpful in identifying important conditions that support the establishment of leadership and employeeship, leading to improvements in healthcare practice.\nOriginality/value: - The paper describes how contemporary leadership in the healthcare sector is constituted through different strategies for meeting multiple logics.\n", "role": "assistant" } ]
Efficient service production: service factories in banking
[ "Business process management", "Banking", "Customer service management", "Product positioning", "Service strategy", "Service positioning", "Service factory", "Efficiency", "Service production", "Service processes" ]
[ { "content": "Summarize the following paper into structured abstract.\n\n1. Introduction: The banking industry is definitely among the service industries that have been thoroughly transformed during the past decades. The direction of development has been toward more efficient mass service production with reduced labour. ICT has been a major enabler in this development, providing the facility to efficiently manage millions of banking transactions of many kinds with minimal human involvement. Consequently, the extensive branch office networks of banks have been reduced to far fewer offices, and their role has been refocused towards customer assistance, rather than providing the basic banking services for consumers. According to statistics, the workforce engaged in banking service operations has been reduced drastically since the 1970s. Obviously, the search for scale economies has been a major objective in the development of banking services. Generally, in the service business the pursuit of efficiency through economies of scale has not previously been as dominant as in manufacturing operations, where efficient large-scale factories are the major production mode. However, the similarities between manufacturing and service operations have attracted the attention of many researchers. The service factory proposed by Schmenner (1986, 2004) for the efficient production of mass services illustrates the parallels of service and manufacturing production.The development directions of services in general, and particularly in banking services, have been analysed with a large variety of approaches and models. The analyses of strategic positioning of services look into determining the right position of a service as compared to its competitors, and fulfilling the customer's needs. 
These studies have analysed the positioning of banking services, as well as the structural changes in banking, including the delivery channel of services.The banking industry has been thoroughly transformed in a way that we can state: \"that the business of banking has changed considerably is an understatement\" (Boot and Marinc, 2008). There are many analyses of banking from a strategic positioning viewpoint, and many of these studies look into the match of different types of banking services with their efficient production. Another approach has looked into the operational efficiency of banking as an industry from a financial performance viewpoint. These studies have looked into the different factors influencing financial performance and efficiency, or business process improvement (Climent et al., 2009).In this paper, we analyse the efficiency of service production in banking by applying the service process matrix of Schmenner (1986, 2004). In particular, the focus of analysis is on service factory production facilities for the efficient production of mass services. The development paths of banking services are discussed from strategic positioning, service operations and service process viewpoints.\n2. Service production and efficiency: There are numerous studies on service production and service efficiency. The large range of existing services has also received attention, and consequently different models and frameworks have been proposed to analyse efficient service production for different service types. Also, as services are produced in service facilities, analogous to production facilities, the efficient combinations of service types and facilities have been analysed. The service facility types include, for example, service shops for customised services.Strategic positioning in service production\n3. 
Service factories in mass services: The term service factory has been used in connection with manufacturing (Chase and Garvin, 1989; Chase, 1991) to describe manufacturing operations where services of different kinds play an important role. Particularly, the competitiveness of manufacturing is deemed to depend on a broader range of services produced within the factory and its facilities. Outside manufacturing operations, the term refers to large-scale service production with features similar to mass manufacturing.Service factories in efficient service production\n4. Service factories in banking: Bank services have been analysed in the service process matrix, including personal banking, commercial banking and back-of-the-house operations. Similar service types have been recognised by many authors. To further analyse some relevant banking services, the range of services recognised in earlier studies is presented in Table III.It seems that basic services, such as over-the-counter branch office services, ATMs, corporate banking, and back office operations have been analysed by most researchers. Newer studies also include internet and home banking, as well as mobile services. Investment services for consumers, including counselling, asset management, etc. have also been studied. Corporate and SME services are typically seen as highly professional services, where knowledge is a major factor in providing customer value.Adapting the service process matrix\n5. Discussion and conclusions: This paper analyses the efficient production of services, focusing on mass services. The framework applied is the service process matrix, which is used to analyse banking services. 
The findings are in line with those of Schmenner in recognising the particular challenges for service factories in the production and offering of mass services.The banking industry has changed tremendously during the last decades with the transformation of banks from branch office-based services to ICT-based mass services. The result is a definite move toward service factory-type operations. However, other strategic positions can also be profitable, as customer segments exist requiring highly professional and customised services. Similarly, efforts of standardisation have created a group of services in service shops, which are offered to customers requiring less customised services. The newer banking services include a multitude of internet-based self-services, ranging from home invoice payment to asset management. Finally, service factories have been recognised to include a wider range of services than previously. Within this quadrant there are operations with very fast service process times and very low service variety. The service process matrix of Schmenner has been adapted to better reflect the internal development inside the service factory quadrant.The analysis of the service factory quadrant emphasizes the increased variety within the quadrant. This is mainly due to ICT and self-services. In many countries internet banking is now the dominant mode for consumers and has been a reality for companies for years. Consequently, mass services have become even more volume based, and the drive for scale economies has become vital for banks focusing on mass markets. It has been emphasized (Hedley et al., 2006) that there are distinct differences in the service capabilities of universal and ultra-focused banks, where universal banks enjoy scale economies in high volume services. Ultra-focused banks specialise by customer segments or service types with consequent scope economies. The lure of the service factory in mass services can be explained by this finding. 
The more customised services are then provided by more focused service processes, either by specialist players or dedicated units within banking corporations. There appears to be a classic strategic choice for banks between scale and focus. Accordingly, universal banks focus on \"super scale economies\", whereas focused banks specialise and differentiate. The service factory will be a powerful tool in achieving the scale economies. The service factory-type bank, however, need not be limited in its service range, but will develop services that have scale economies due to ICT. These include many internet-based and mobile banking services, along with more customised services. However, instead of the traditional personnel, customisation will be based on self-services supporting the customers in their decision making, e.g. in investments.\n", "role": "user" }, { "content": "Purpose: - The banking industry is definitely among the service industries that have been thoroughly transformed during the past decades. The direction of development has been toward more efficient mass services enabled by information and communication technologies (ICT). The purpose of this paper is to focus on analysing the changes in banking services, particularly the mass services offered through service factories. The aim is to review the previous literature on efficient production of banking services, and particularly analyse the effects of service factories in this area.\nDesign/methodology/approach: - The theoretical focus of the paper reviews the frameworks and models for service positioning and efficiency for recognising the typical banking services used in the analyses. The Service Process Matrix is analysed for its components and implications for recognising efficient service processes for different types of services.\nFindings: - The findings show that the Service Process Matrix provides a tool for categorising different services for their efficiency in the case industry analysed. 
The matrix is adapted to better reflect the changes in banking services.\nOriginality/value: - The quest for service efficiency is important in public services and service businesses alike. Several tools and models have been proposed to analyse the most appropriate way to produce services of different types. This paper analyses this area and provides insights for managers on how to position different service types for best customer value and efficient service processes. The paper also provides insights for service researchers looking for frameworks to categorise different service types.\n", "role": "assistant" } ]
Gamble more than you want? A study of casino servicescape, perceived control and unplanned gaming behaviors
[ "Perceived behavioral control", "Casino servicescape", "Unplanned gaming" ]
[ { "content": "Summarize the following paper into structured abstract.\n\nIntroduction: The global casino gaming entertainment industry is large and expanding. According to PricewaterhouseCoopers (2015), it is worth US$182.8bn versus US$117.6bn five years ago. The Asia-Pacific region is currently the largest casino gaming market in the world, followed closely by the USA. Given that casino gaming is increasingly a major entertainment and leisure activity, a growing number of studies have investigated gaming in the past decade (Fong et al., 2017; Lam et al., 2011; Liu et al., 2014; Wong and Wu, 2013). Research on the casino entertainment business has typically centered on gamblers' motivations, preferences, satisfaction and revisit intentions (Lam et al., 2011; Wan, 2013; Yi and Busser, 2008). Prior research found that the service environment has a significant impact on customer satisfaction, behavioral intentions, and actual behaviors (Durna et al., 2015; Jalilvand et al., 2017; Prentice and Wong, 2016; Wong et al., 2012; Wong and Prentice, 2015). As such, modern casinos often spend a large amount of resources building conducive and comfortable service environments for their customers. Their aim is to encourage patrons to stay longer and gamble more in the casinos. In contrast, major public and private stakeholders are pressuring casinos to conform to socially responsible gaming policies to discourage excessive gambling (Henderson, 2012; Hing, 2001). Hence, a better understanding of how a casino's physical environment affects consumers' unplanned gaming behaviors would be useful to businesses and governments alike.\nLiterature review and hypotheses development: Perceived behavioral control and unplanned gaming behaviors\nMethod: Data collection\nResearch findings: Table I presents the means, standard deviations, and correlations between key variables. 
Consistent with our expectations, the casino servicescape is positively correlated with perceived internal control (r = 0.23, p < 0.01) and perceived external control (r = 0.16, p < 0.01). Perceived internal control (r = -0.28, p < 0.01) and perceived external control (r = -0.15, p < 0.01) are negatively correlated with unplanned gaming behaviors.\nDiscussion: Conclusions\n", "role": "user" }, { "content": "Purpose: This study aims to explore the influence of casino servicescape on unplanned gaming behaviors by examining the mediating role of casino customers' perceived behavioral control (i.e. perceived internal and external control).\nDesign/methodology/approach: Data were collected from 475 casino customers using street intercept survey. Multiple regression analysis was used to evaluate the mediating role of perceived behavioral control.\nFindings: Results show a negative indirect effect of casino servicescape evaluation on customers' unplanned gaming behaviors through perceived internal control. Moreover, casino customers' perceived external control does not mediate the relationship between casino servicescape evaluation and unplanned gaming behaviors.\nResearch limitations/implications: Findings suggest that a favorable casino servicescape may enhance customers' perception of control over their personal resources (e.g. skills and knowledge), which in turn facilitate self-regulation of their unplanned gaming behaviors. The findings are limited to Chinese casino customers, and researchers should further validate the results for non-Chinese casino patrons.\nPractical implications: To casino operators, results suggest that the Friedman-style casino design (i.e. compact gambling areas and low ceilings) potentially encourages unplanned gaming behaviors of casino customers. 
From a social-policy perspective, governments can create effective responsible gaming programs by enhancing casino customers' perceived internal control.\nOriginality/value: This study illuminates the process that connects casino servicescape with unplanned gaming behavior by explicitly testing the mediating role of customers' perceived behavioral control. Moreover, it examines unplanned gaming behaviors of leisure gamblers who represent the mass gaming population.\n", "role": "assistant" } ]
ERP in the minds of supervisors: Joint roles of task interdependence and cultural norms
[ "Manufacturing resource planning", "Managers", "Culture", "United States of America", "China" ]
[ { "content": "Summarize the following paper into structured abstract.\n\n1 Introduction: Existing research into enterprise resource planning (ERP) adoption and implementation has suggested that national culture can have a significant impact on the types of difficulties encountered by managers with regard to the effectiveness of specific management tactics in such contexts (Soh et al., 2000). In these studies, comparisons between collectivistic cultures and individualistic cultures have been prominent. With the reported level of growth in the ERP market in China (Martinsons, 2004), any significant roles that collectivistic culture may play in the effective management of these technologies are of vital interest to the myriad of vendors, consultants and adopting firms poised to profit from this growth. Given the association of collectivistic cultures with an emphasis on shared objectives and encouraged exchanges of information (Earley, 1989), it is likely that the intra-organizational communication capabilities of such systems would receive considerable attention in collectivistic settings.Other non-cultural issues can have a similar effect on management interests in these capabilities. For example, in settings involving increasingly sequential or reciprocal task interdependencies (Thompson, 1967), individual business units, sub-units and workers are increasingly subject to difficulties associated with breakdowns in information flow (Galbraith, 1977). Intra-organizational information-sharing mechanisms, such as those provided by ERP, help to augment such flow by enabling cross-functional or interpersonal visibility of related and relevant information (Child and McGrath, 1991). 
Thus, managers confronted with increasingly interdependent settings may be more inclined to support technologies that provide such capabilities.Modern resource planning (RP) systems have popularly been characterized in terms of their transaction processing and data structure capabilities (Cotteleer and Bendoly, 2006; McAfee, 2002; Davenport, 2000). However, reviews of research and practitioner articles suggest that, regardless of their integrative features, few operating managers think of these systems as more than transactional and automated planning mechanisms (Jacobs and Bendoly, 2003). Few provide in-depth consideration or evaluation of these systems as tools for communication and collaboration. The burgeoning potential of the Chinese ERP market and the importance placed on intra-organizational communications in collectivistic cultures and/or in highly interdependent work settings suggest that such evaluations are overdue. Since the recognition of capability benefits can strongly influence the motivation behind managerial support of technologies, an investigation into this line of logic, therefore, clearly presents a unique and relevant contribution to the management of technology literature.Our study takes an experimental approach to investigate managerial assessments of the usefulness of RP systems as tools for intra-organizational communication. We specifically consider the joint role that cultural norms and interdependence among internal operational activities have on these managerial views. In our design and analysis, we simultaneously include a range of controls accounting for prior managerial experiences in an attempt to disentangle various exogenous effects from those effects under investigation. The next section will describe the theoretical foundations of our research questions and will be followed by a detailed description of our experiment. 
We will then describe the analysis and overview of our results and discuss the implications for both future research and practice.\n2 Theoretical foundations: The role of interdependence at the operational level of a firm poses several implications with respect to information needs and means of exchange. To ground these considerations, we adapt a traditional interpretation of interdependence defined as the degree to which two or more activities interact to determine an outcome jointly (Thompson, 1967; McCann and Ferry, 1979; Dean and Snell, 1991). Highly interdependent environments thus represent settings in which regular exchange of information is necessary in safeguarding against operational conflicts.While information-sharing between interacting activities can be essential, Sorenson's (2003) recent study of vertical integration as a proxy for interdependence also suggests that highly interdependent environments create barriers to learning and effective exchange. This phenomenon is attributed largely to increased difficulty in pinpointing sources of error and constraints in these increasingly complex settings (March and Simon, 1958; Simon, 1962; Rivkin, 2000). As a result of these two issues acting in tandem (i.e. the greater need for information and greater barriers to exchange), the effectiveness of such complex, highly interdependent settings is extremely reliant on the existence of management and organization-wide efforts aimed at facilitating information-sharing (McCann and Ferry, 1979; Child and McGrath, 1991).Ultimately, one of the principal roles of managers is to \"mitigate problems caused by interdependent processes within firms\" (King, 1999). This mitigation can take the form of direct policy changes or assistance in the synchronization of delegated worker responsibilities (MacNeil, 2004; Galbraith, 1977). 
More specifically, Galbraith's (1977) information processing theory (IPT) suggests two mechanisms for supporting worker activities in complex, highly interdependent settings. One of these mechanisms involves the fostering of lateral relations, such as work teams, to augment interpersonal information exchanges. Alternately, information systems designed to facilitate information-sharing capabilities, in a perhaps more codified form, are thought to enable similar benefits (Bendoly and Jacobs, 2005; Bendoly et al., 2006). The view that specific forms of information technology can meet the information processing requirements of particularly interdependent contexts has been reiterated in a number of recent operational studies (Stock and Tatikonda, 2000; Tatikonda and Rosenthal, 2000; Flynn and Flynn, 1999). In fact, Flynn and Flynn (1999) as well as Morgan (1998) cite the replacement of isolated legacy systems by enterprise-wide, fully integrated transaction-processing systems as a recent instantiation of this need. The findings of Gattiker and Goodhue (2004) further suggest that the greater the extent of interdependence, the greater the potential, or at least perceived potential, for RP system utility. Yet while the mechanisms (e.g. increased communication) available to assist workers in these interdependent tasks have received some consideration in the literature, oddly, little attention has been given to the factors motivating managers to champion the use of these mechanisms in specific contexts (King, 1999; Therkelsen and Fiebich, 2003).2.1 Resource planning system capabilities\n3 Methodology: 3.1 Design and participants\n4 Data and analysis: 4.1 Manipulation checks and descriptives\n5 Discussion: These results provide support for the study hypotheses and suggest that the effects of task interdependence on perceptions of the importance of ERP communicative capabilities may be moderated by culture. 
In particular, in the US, the perceived importance of these capabilities may depend largely on task interdependence such that ERP systems are recognized as being more important for intra-organization communication in contexts characterized by higher levels of task interdependence. Conversely, in the PRC sample, there was relatively little effect of task interdependence on the importance attributed to ERP communication capabilities, where in both treatment conditions such capabilities were perceived to be of nearly equivalent importance.With reference to the existing research literature, these findings provide a unique contribution to understanding the significance of contextual, cultural and experiential influence on the kind of managerial support that is assumed to be so critical in technology adoption and diffusion among workers. It extends existing applications of IPT by showing how informational needs posed by such theory in specific (i.e. collectivistic or highly task-interdependent) contexts can be used to suggest influences on managerial views. The findings thus provide a potentially powerful link between IPT and alternate theoretical frameworks involving managerial support as antecedents of successful technology application (Eder and Igbaria, 2001; Zmud, 1984).Our results also have interesting implications for the burgeoning ERP market in China, not to mention other national settings in which high levels of collectivism are the norm. If managers in China (as compared to their US counterparts) can be characterized as highly enthusiastic about the communication capabilities provided by ERP systems regardless of the extent to which internal processes are interdependent, then the business cases that support ERP adoption and extension should be expected to emphasize the benefits of such capabilities. In particular, this emphasis should be much greater and more clearly delineated than those developed in the USA on average. 
With the myriad of bolt-ons now available to ERP architectures, one would imagine choices related to bolt-on adoption to be similarly biased. Ultimately, the prototypical characteristics of extended ERP architectures may become so distinct that they give rise to competitive opportunities (Bendoly and Jacobs, 2005) out of reach of engrained US socio-technological architectural designs.In light of the potential significance of these findings it is relevant to scrutinize the potential limitations of the current work. In the design of this study, several precautions were taken to limit the role of exogenous factors in the analysis. Prior experience with RP systems, interdependent settings and existing views of the usefulness of these systems were included to help account for potentially significant sources of variation. The general experimental design in which respondents are exposed to one of two conditions (high vs low interdependence cases) provided a critical level of control for delineating the effects studied, reducing the risk of halo effects associated with common source bias. However, regardless of the precautions taken in this study, the results and phenomena suggested should be carefully interpreted. Although the effectiveness of our experimental manipulation was validated statistically (Cook and Campbell, 1979), the contexts considered in these treatments may not provide meaningful comparisons that can be extended to certain alternate manufacturing and service settings. Furthermore, the choice of the manufacturing settings used in our treatment cases may have created unanticipated biases in interpretation that subsequently influenced the results.An added limitation may come from our specific operationalization of concepts in this study. 
For example, while we base our measure of interdependence (used in our manipulation checks) on the work of Van der Vegt and Janssen (2003), alternate scales (McCann and Ferry, 1979; Wybo and Goodhue, 1995) may have yielded somewhat modified results. While our results do not suggest that this was an issue in distinguishing our treatment contexts, further studies might be strengthened by the incorporation of multiple views of such widely applied concepts.Another possible limitation has to do with the selectivity of the sample studied. Convenience sampling has been viewed as an acceptable approach in recent OM studies (Hausman et al., 2002; Schultz et al., 2003) due in part to the benefits of increased internal validity and control from such selection (Hoyle et al., 2002). These added benefits were particularly relevant to this study since the timing and control of the experimental treatments was necessary to ensure the interpretability of the results. Nevertheless, it should be recognized that such an approach may imply sample biases that limit the robust application of results, i.e. in much the same way that the initial selection of many survey target-populations limits interpretation. Future work with alternative samples should be conducted to further validate or refine the results described here.\n", "role": "user" }, { "content": "Purpose: - The purpose of this paper is to test the existence of direct and moderating effects of task interdependence and culture on supervisory views of enterprise resource planning (ERP) systems.\nDesign/methodology/approach: - To analyze these effects, an experiment was conducted in the USA and The People's Republic of China. A total of 304 management supervisors participated. 
Participants were exposed to an interdependence manipulation and then rated the importance of ERP in the case contexts described by experimental treatments.\nFindings: - Results support the moderating effects of culture on the extent to which task interdependence impacts managerial views of the communicative capabilities of ERP systems. Task interdependence effects are much less severe among the views of Chinese managers.\nResearch limitations/implications: - Main limitations potentially stem from our specific operationalizations of the factors studied as well as selectivity of the subject pool. As with many empirical comparisons of culture, these limitations may confine the application of the findings to the two national contexts studied.\nPractical implications: - If managers in China (as compared to their US counterparts) are more enthusiastic about the communication capabilities provided by ERP systems regardless of the extent to which internal processes are interdependent, then the business cases that support ERP adoption and extension should be expected to emphasize the benefits of such capabilities. This may foster a strategic distinction in the use of these architectures in the two settings.\nOriginality/value: - This study specifically examines the interactive effects of task interdependency and culture on managerial perceptions regarding ERP communicative capabilities.\n", "role": "assistant" } ]
Continuance interaction intention in retailing: Relations between customer values, satisfaction, loyalty, and identification
[ "Survey", "Identification", "Loyalty", "Customer satisfaction", "Reputation", "Identity", "Customer relationship management (CRM)", "Online shopping", "Customer values", "Continuance interaction intention", "Retailing sector" ]
[ { "content": "Summarize the following paper into structured abstract.\n\n1. Introduction: Continuance interaction intention is essential to the future of social media services with rapid expansion and rigorous competition among its suppliers. Social media help companies to broaden customer reach and interact with them more proficiently (Duffett, 2015; Whiting and Williams, 2013; Yin et al., 2015). Online micro-blogging platforms such as Facebook and Twitter have increased in popularity over the past five years (Cuomo et al., 2017; Hajli et al., 2015). The primary goal of micro-blogging platforms as an application or tool is to create content compelling enough for users to connect, communicate, share information, experience and interact with others (Cuomo et al., 2016, 2017; Weinberg, 2009). Organizations have declared that customers are now more value conscious (Leroi-Werelds et al., 2014). Many organizations set up brand or company pages (also called brand communities or fan pages) on micro-blogging platforms to connect with their consumers and get closer to them through continued interactions (Hutter et al., 2013). The micro-blogging platform is one of the most outstanding digital tools to create consumer awareness and experience (Cuomo et al., 2016, 2017; Edison Research, 2012; Lipizzi et al., 2016; SynForce, 2013). Consumers with a high level of continued interaction on the platform receive more marketing information regarding the brand, which creates more profound satisfaction, loyalty and identification toward the brand (Trainor, 2012). Satisfaction, loyalty and identification signify a collective experience and feeling created by various interactions with the brand (Ageeva et al., 2018; Bhattacherjee, 2001; Foroudi, 2016, 2019; Gao et al., 2014; Melewar et al., 2017). However, there are a limited number of studies that have examined the effect of satisfaction, loyalty and identification on continuance interaction intention.\n2. 
Theoretical background: The crucial role played by innovation as a lever for the growth of enterprises in a highly competitive environment, as amply demonstrated in many scientific studies, is indisputable (Ngo and O'Cass, 2013). In this regard, the world of retailing is perfectly aware of the great opportunities that technological innovation can offer in improving the relationship with markets (Foroudi et al., 2016). Since this phenomenon creates both challenges and marketing opportunities for companies, the real change goes beyond merely introducing or making use of technological innovation (Zuo et al., 2019). However, nowadays, retailers think digitally, thus activating a process of cultural evolution and upgrading of skills that involves the corporation as a whole (Bessant et al., 2010).\n3. Methodology: 3.1 Sample and procedure\n4. Data analysis: Descriptive statistics for the entire research sample were examined using the Statistical Package for Social Science. Exploratory factor analysis (EFA), as an essential method, and coefficient α were carried out in the initial stages of the current study for scale validity to reduce the number of observed study indicators (Hair et al., 2006). To examine the causal association between the research constructs and to determine the quality of the measurement model, structural equation modeling (SEM) using Analysis of Moment Structures was performed. Also, SEM is a reliable tool and technique for modeling, and it is the most critical method for theory testing in marketing (Fornell and Larcker, 1981; Steenkamp and Baumgartner, 2000). The descriptive statistics and correlation matrix for the items are illustrated in Table II.\n5. Discussion of the research findings: This study examines how customer values (functional, social and experimental) enhance satisfaction, loyalty and identification, and how such relationships, in turn, impact users' continuance interaction intention. 
SET (Blau, 1964; Cropanzano and Mitchell, 2005; Homans, 1961) and customer value theory (Chen and Dubinsky, 2003; Shi et al., 2016) classify consumers' value of companies into three categories: functional value, social value and experiential value (Dennis et al., 2014; Shi et al., 2016). According to Dennis et al. (2014) and Shi et al. (2016), although the literature pays much attention to consumer value, it has received little attention to date. To fill this gap and to use the theory of social exchange and customer value, our research pushes beyond the current boundaries to relate the consumer value, satisfaction, loyalty, identification and continuance interaction intention concepts:Functional value describes a large portion of social media service user experiential value and satisfaction within three dimensions (information quality, product-related learning, and economic benefit). Our interviewees confirm the relationships: When we discuss about our functional value, I assume you mean the type of values, offers, services which we provide to our customers. As a brand manager, our main focus is to give a right services and products to our clients, we help them to promote their company or brand well to market, and it is quite challengeable these days and we should compete with our competitors. Social media is the cheapest tool to introduce their product, interact with the right target market, and increase sales and performance.(Brand Manager 39 years old)We all use social media daily, or even every hour, or every minutes of our lives are related to our online community, we are in a digital era, we share information, interact with all my friends, colleagues, customers around the world. Interestingly, we receive fast reply, is not it amazing?(Communication Consultant 27 years old)The results show that functional value has no significant link to loyalty. 
Also, one interviewee mentioned that \"even we have a great relationships with our customers, but sometimes, they visit different retailers [...] based on their purchases, we send promotions and offers to them but I don't really believe the word of loyalty these days [...] I think social media make lives tough [...] we always analysis other retailers and we have to be updated.(Retail Manager 34 years old)\n6. Implications, limitations and future research: The current study is one of the very few emerging research studies to have examined the relationships between consumers'/visitors'/shoppers' functional values, social values and experimental values, and to have further explored the relationships between consumers'/visitors'/shoppers' value on satisfaction, loyalty, and identification, and how such relationships, in turn, impact their continuance interaction intention on brand pages in social media. The sample, consisting of 390 valid responses, was collected from consumers of high-end retailers of worldwide brands. Data were analyzed employing SEM.\n7. Conclusion: The results of the current research illustrate that continuance interaction intention depends mostly on user satisfaction, loyalty and identification, which are driven by functional and social values. The present study offers theoretical and managerial contributions to marketing and innovation management for the high-end retail setting by concentrating on social media on consumer-computer interaction. The analytical findings indicate that functional value has a positive influence on user's experiential value, in turn leading to satisfaction compared to rival brands. Previous research works have studied those links between consumer values and continuance interaction intention, but they have ignored the main constructs which are involved in their relationships. Unexpectedly, based on the respondents' perception toward a brand's social media, there was no impact of functional benefits on loyalty. 
Individually, regarding social value, factors such as interaction, collaboration and social presence exert the most substantial influence on satisfaction, followed by loyalty. In addition, our results reveal that identification positively influences continuance interaction intention. In sum, our findings indicate that, overall, consumers'/visitors'/shoppers' continued interaction intention is meaningfully driven by social and functional values.\n", "role": "user" }, { "content": "Purpose: Social media as a competitive marketing tool deliver online platforms for retailers to get closer to their consumers/visitors/shoppers through continued interaction. The purpose of this paper is to scrutinize how customer values (functional, social and experimental) enhance satisfaction, loyalty and identification, and how such relationships, in turn, impact users' continuance interaction intention.\nDesign/methodology/approach: A mixed-method approach was designed to identify the consumers' perception toward high-end retailers of worldwide brands. In all, 12 interviews with experts in retailing and a survey among 390 respondents were conducted. Structural equation modeling/AMOS was employed to gain insight into the various relationships and influences.\nFindings: To augment users' continuance interaction intention, the results indicate that retail managers should focus more on customer-based values when they design marketing strategies for brand pages on social media. 
The findings also provide guidelines for retail marketing and social media managers to generate consumer value in the retail environment via information quality, product-related learning and economic benefits (functional value); interaction, collaboration and social presence (social value); and experiential value (intellectual and effective value).\nOriginality/value: The paper offers critical managerial contributions by presenting a comprehensive picture of the condition in which a favorable brand social media page could be constructed within a brand to satisfy consumer value and achieve satisfaction, loyalty, identification and continuance interaction intention, all of which are critical objectives for every company. In other words, a clear knowledge of the dimensions of consumer value concepts can assist retail communication managers to improve consumers'/visitors'/shoppers' intention to continue their interaction in a competitive market. The current study is one of the very few emerging research studies to have examined the relationships between consumers'/visitors'/shoppers' functional values, social values and experimental values empirically, and to have further explored the relationships between the research constructs.\n", "role": "assistant" } ]
Does gym use impact upon nutritional knowledge?
[ "Nutrition", "Labelling", "Food products", "Gymnasia", "United Kingdom" ]
[ { "content": "Summarize the following paper into structured abstract.\n\nIntroduction: Over the last two decades of the twentieth century, there has been a dramatic increase in the general public's interest in and knowledge of nutritional issues (Shine et al., 1997a); however, consumers' ability to interpret this knowledge whilst shopping for food is still quite limited (National Consumer Council, 2003). Nutrition labels offer an important tool that can help consumers to improve their quality of life by making better food choices. The Institute of Grocery Distribution (IGD, 2003) found that consumers want clearer food labelling and healthier food choices to make it easier for them to follow a healthy diet. Labels may be the only source of nutritional information available to the consumer at the point of purchase, therefore it is important that the information presented is in a clear, easy-to-understand format.In 1990, due to the escalating public health problems and growing public interest in the relationship between diet and health, the European Commission passed a new Directive (90/493/EEC) regarding nutrition labelling, which defines it as \"any information appearing on a label relating to energy and the following nutrients: protein, carbohydrate (CHO), fat, fibre, sodium, vitamins and minerals\". However, directives (compared to regulations) are not binding, meaning that each Member State has the flexibility to implement a directive accordingly as long as the result is achieved and the spirit of the directive is met (Hurt, 2002).In the UK, the provisions on nutrition labelling (Official Journal of the European Communities, 1990) are covered by the Food Labelling Regulations advisory Guidance Notes to Nutrition Labelling by the Ministry of Agriculture, Fisheries and Food (MAFF, 1996). Nutrition labelling is currently voluntary, unless a product contains a nutrition claim such as \"low fat\" or \"high in fibre\".
However, the government recommends that Group 2 information (also known as Big 4+Little 4) is given on all foods (MAFF, 1999). In addition, the European Commission (2007) regulation (EC) No. 1924/2006 \"Nutrition and Health Claims made on foods\" has recently been introduced, which provides new regulations on nutrition claims.The Food Standards Agency (FSA, 2005a) investigated the most easily understood labelling scheme format, as it is likely that nutrition labels will become compulsory in the future due to the increased demand from consumers for clearer labels. In 2007, it launched the Traffic Light System (FSA, 2007a). However, at the time of writing this paper a number of labelling schemes are being used in the UK by the food industry and retailers, and the FSA (2007b) is currently conducting research on the most effective format, with comparisons being made to the IGD's Guideline Daily Amount (GDA) system amongst others.Members of the general public are often aware that overall wellbeing can be achieved through a balanced diet combined with regular exercise. The UK gym industry is experiencing a boom, worth more than £1.6 billion a year (BBC, 2005); however, obesity in the UK and other European countries is on the increase (Europa, 2005).Regular exercise or physical activity is not only a great tool for controlling body weight but it also makes the heart stronger and can help prevent cardiovascular disease (Rodrigo et al., 1999) and colon cancer (World Cancer Research Fund, 2007). It is estimated that one third of all coronary heart disease cases and a quarter of strokes could be prevented with an appropriate exercise regime (Adab and Macfarlane, 1998).
The current recommendations of the Department of Health (2004) state that both adults and young people should perform at least 30 minutes of moderately intense physical activity on most, if not all, days of the week.Numerous studies have been published describing consumers' use and understanding of nutrition labels (Shine et al., 1997a; Wandel, 1999; Byrd-Bredbenner et al., 2000; Nagya, 2000; Higginson et al., 2002a, b); however, we were able to find only a limited number of studies comparing the differences in nutritional knowledge between physically active and non-active subjects (Barr, 1987). For the purpose of this study, gym users were identified as a group of particular interest as they were assumed to have a greater interest in health, wellbeing and perhaps weight loss. This in turn could be associated with a greater use of food-labelling information and possibly a higher level of nutritional knowledge. Therefore, this study aimed to explore whether differences exist between gym and non-gym users, and between genders, with regard to the use and understanding of nutrition labels.\nMethods: Questionnaire development\nResults: Sample characteristics\nDiscussion: Food labels are viewed as an important source of information to the consumer as they are the only point-of-purchase information available to them, as direct contact with over the counter personnel in supermarkets is quite low (Wandel and Bugge, 1997). As previous studies (Eves et al., 1994; Byrd-Bredbenner et al., 2000) have also identified, the great majority of subjects reported to read nutrition labels. Gender was found to play a role in nutrition label reading, with women more likely to read labels than men, similar to previous research findings (Shine et al., 1997b; FSA, 2001b). One possible explanation for this factor is that women are still responsible for all or most household shopping (80 per cent) compared to 20 per cent of men (FSA, 2001a).
However, the FSA study was conducted across the whole of the UK and had a greater diversity of age groups. This study was conducted only in the Greater London area with more than half of the subjects falling into the 25-34 age group and used a convenience sampling frame. The results may therefore be biased as many people of this age, especially in London, tend to be unmarried or single and, regardless of gender, to do their own shopping.There was no statistically significant difference in the rate of label reading between gym and non-gym users. Although gym users were hypothesised to be more knowledgeable about nutritional issues and to be more concerned about their health, subjects who answered that they were non-gym users could be undertaking different forms of physical activity and therefore be equally concerned about their health and be aware of the connection between regular exercise, adequate nutrition and health. Only a minority of subjects indicated that they never read nutrition labels, mainly because they are not interested, do not have the time or find them misleading.Just over half of the respondents who read nutrition labels found them easy or fairly easy to understand, with a further 30 per cent answering neither easy nor difficult. Similar findings have been reported by other researchers (Higginson et al., 2002a); however, there is little knowledge available about how consumers actually process the information obtained from labels. Shine et al. (1997b) reported that \"consumers are very much aware of the concept of nutrition labelling but often find it difficult to use the information obtained\". They also concluded that nutrition information must be presented in a format that can be easily understood by the consumer.
This is especially important nowadays when people have better awareness of nutrition and nutrition labels but do not have the time to read the whole label, as was given as the third most common reason for not reading food labels in this survey. Labels should therefore be well arranged and easy and quick to evaluate. Lobstein and Davies (2008) have recently reiterated these points and added that a uniform approach to nutrient profiling is needed, not only to help consumers, but also manufacturers and retailers.Total and saturated fat and total energy content were the most frequently viewed items on nutrition labels. These findings mirror the studies by MAFF (1995), Wandel and Bugge (1997) and more recently Higginson et al. (2002b). Sugar content also attracted attention among a substantial proportion of respondents, whereas only a minor proportion was interested in the amount of protein. Females were significantly more likely to read the total energy content on labels, whilst males and gym users were more likely to search for protein information. A possible explanation for males and gym users viewing the protein information significantly more often could be a general high interest in muscular gain, rather than weight loss, amongst these groups, or perhaps the recent high interest in and publicity of high-protein diets, which are especially popular amongst men (Bravata et al., 2003).Those who currently want to reduce their fat intake significantly more often answered that they always search the labels for total energy and total and saturated fats information. A possible reason for this difference is that they search for this information in order to avoid the consumption of these nutrients. This is in line with the findings of Shine et al. (1997b) who reported that a majority of subjects indicated they search for information on nutrients they wish to avoid, with fat being the most common item.
The high interest in saturated fat information may be due to the recent recommendations for consumers to reduce the amount of this type of fat consumed in order to decrease cardiovascular disease occurrence (FSA, 2005b).Method of viewing information on nutrition labels\nConclusion: The results of this study suggest that a majority of consumers tend to read nutrition labels, with significantly higher reading rates amongst females. Although reading is high, it does not necessarily mean consumers can translate the information into healthy eating. Indeed, the overall low knowledge of the calorie content of macronutrients, and of fat in particular, indicates consumers need to be further educated or provided with more information on these issues if they are to make informed healthier food choices, be it whilst shopping for food or when eating in a food outlet. On the whole, the hypothesis that: H1. Gym users have better nutritional knowledge than non-gym users, was not confirmed, although there were some minor differences in the interest/knowledge of protein between male gym users and non-users.Further research is needed regarding the actual ability of consumers to interpret nutrition labels and to assess possible differences in use and understanding between physically active and non-active people.\n", "role": "user" }, { "content": "Purpose: - The purpose of this paper is to investigate gym and non-gym users' use and understanding of nutrition labels.\nDesign/methodology/approach: - A consumer survey in the form of a questionnaire conducted in the Greater London area in February/March 2005. The subject recruitment process took place in both a gym and university setting. Frequency tables and chi-square (χ2) tests are used to assess relationships between variables (p=0.05).\nFindings: - The resulting sample consisted of 187 subjects, with a predominance of females and gym users.
Of the subjects, 88 per cent reported at least occasionally reading nutrition labels, with higher reading rates amongst women, irrespective of gym user status. Total and saturated fat are the items of information most often viewed on labels; however, the overall knowledge of the calorie content of fat is low, with 53 per cent of subjects responding that saturated fat contains more calories per gram when compared with other types of fats. This paper does not find significant differences in the use and understanding of nutrition labels between gym and non-gym users, but highlights the public's continued lack of understanding of nutrition labels.\nOriginality/value: - This paper is unique as it investigates whether there is any difference between gym/non-gym users' use and interpretation of nutrition labels. It finds that gender impacted nutrition label knowledge more than gym-user status. This points to a gender issue and questions the quality of information available to the general public. This paper is valuable as it highlights and identifies an area that requires further research and assessment, and is therefore useful to key stakeholders responsible for public health nutrition.\n", "role": "assistant" } ]
Playing with pirates: how companies can win the endgame
[ "Behavioural economics", "Preferences", "Intrinsic motivation", "Piracy", "Cultural goods", "Manga" ]
[ { "content": "Summarize the following paper into structured abstract.\n\n1. Introduction: Does it make sense for a best-selling author to pirate his own books? Indeed, piracy is frequently presented as a harmful activity for original producers, and this outcome is likely in many situations. To account for otherwise surprisingly less-than-vigorous enforcement in some domains, the conventional rationale cites budget constraints as a result of low infraction costs on the part of pirates and high detection and enforcement costs on the part of genuine producers. Nevertheless, we argue that eliminating piracy is not always desirable even at zero enforcement cost, because genuine producers can in some plausible circumstances strategically use (and even encourage) piracy to increase their profits at levels that will be difficult to reach without piracy. Several mechanisms explain how piracy can be profitable to the pirated firm in the context of digital products. For example, the network effect implies that the value of a given technology increases when more people use it. Usual examples are telephone or online social networks. Consequently, to quickly reach a critical mass of users and a dominant position, it makes sense for genuine producers to tolerate and even encourage piracy.\n2. Pirates as intrinsically motivated expanders of markets: It is a common assumption in standard economic theory to consider preferences as given and exogenous. In line with this rationale, if credible information is available without costs, people will make informed choices reflecting their preferences. Nevertheless, there is considerable evidence that preferences can be shaped by different means that are usually costly, e.g. education and publicity. Rather than just allowing people to exhibit their given preferences, some pirates frequently work hard to change individuals' preferences in a sense that can ultimately offer unexpected market opportunities for pirated firms. 
Indeed, firms that market cultural goods frequently launch innovative products and services that may not immediately fit consumer expectations. Sometimes, the objective function of pirates incorporates elements that are not in the objective function of the legitimate producers, putting them in a better position to shape preferences.\n3. Pirates as providers of strategic marketing information: Getting information on consumer tastes to tailor the product and services offered is frequently considered a necessary but costly activity for firms. Exploiting piracy data cleverly can allow legitimate producers to acquire valuable information on consumers' expectations at lower cost and can lead to cost savings. Simply put, if the net value of gathered information is higher than net losses generated by sales substitution and other detrimental piracy effects, legitimate producers can find piracy more profitable than a situation without piracy. In the case of music piracy, information on downloaders such as their place of residence can help to better organize a performance tour or the sales of derivative products. The Radiohead release of their album under a pay-what-you-want option on the Web as a piracy substitute provided information which was useful to better organize the tour, given that 1.2 million fans were expected to attend the tour.\n4. Policy implications and conclusion: Two principal objections to \"beneficial piracy\" arguments are frequently advanced. First, if piracy is beneficial, then beneficial effects could be replicated by the genuine producer. As they do not do this, then piracy is not generating benefits unless we assume they are irrational or misinformed. 
Second, if piracy is beneficial, it remains unexplained why genuine producers spend non-trivial amounts on detecting and bringing suit against pirates.\n", "role": "user" }, { "content": "Purpose: The purpose of this paper is to expose two mechanisms by which pirated firms that market cultural goods can strategically use piracy to increase their profits.\nDesign/methodology/approach: The authors interpret several world examples through the lens of behavioral economics to identify mechanisms that can make pirates' initiatives profitable for the intellectual property rights holders. They also address two principal objections to \"beneficial piracy\".\nFindings: First, intrinsically motivated pirates can be in a better position to shape consumers' preferences in a sense favorable to the firm profit. Second, pirates can generate strategic information that can help original producers to increase their profits.\nResearch limitations/implications: Fighting piracy can be perfectly justified from a legal viewpoint while constituting simultaneously a bad decision for a business viewpoint. Moreover, some pirates follow ethical rules that can lead to a symbiotic relationship with pirated businesses. Nevertheless, the generalization of our analysis must be undertaken with caution given that our arguments have been developed out of observations of particular institutional settings.\nPractical implications: Distinguishing \"good pirates\" from \"bad ones\" can lead managers to reconsider the systematic disapproval of copyright infringement in favor of more nuanced approaches.\nSocial implications: Piracy can be useful under some circumstances for the pirated firm and even for the whole society, by increasing access to cultural goods.\nOriginality/value: The authors identify two mechanisms that can make piracy profitable for pirated firms. 
These insights can help managers to avoid a \"one-size-fits-all\" policy regarding piracy and to seek how to create conditions for a mutual and shared success.\n", "role": "assistant" } ]
Improving trust, improving schools: Findings from a social network analysis of 43 primary schools in England
[ "Social capital", "Leadership", "Networks", "Professional capital", "Knowledge mobilization", "Decisional capital" ]
[ { "content": "Summarize the following paper into structured abstract.\n\n1. Objectives: The use of research evidence by teachers is currently experiencing global policy and practice interest, with many governmental agencies touting the importance of research-informed efforts at improvement. As school systems worldwide embark and expand on serious structural and instructional changes, the result is that school leaders, teachers, and educators in general, must now demonstrate the \"proof\" behind their teaching and learning strategies as well as provide evidence prior to undertaking interventions. As a consequence, the use of research and evidence is being positioned as vital to providing validity to practice Brown (2015). However, despite this growing press for the use of research evidence there has been a failure of research to make sustained impact on the practices of teachers or even enter the school building (Bryk et al., 2011; Taylor, 2013). Likewise, despite considerable activity, the development of processes for the effective connection and application of research evidence to practice remains underdeveloped (Slavin, 2008; Gough et al., 2011). Given the importance of the issue, the lack of progress in the area, and the resurgence of interest in research evidence, this paper seeks to add insight to the fields of research/evidence use (RE Use) and school improvement, by examining what drives teachers' perceptions that their school: first, encourages the use of research evidence to support the improvement of teaching practice; and second, whether teachers perceive that the school improvement strategies of their school are grounded in research and evidence on effective practice.\n2. Perspectives: 2.1. The \"self-improving school-led system\" in England\n3. 
Data sources: In response to the current policy environment and to encourage schools to \"self-improve\" through the use of research, early in 2014, the EEF (a grant-making charity whose work centers on identifying and funding promising educational innovations that address the needs of disadvantaged children in primary and secondary schools in England)[2] opened a competition for funding for projects that would focus on increasing the use of research in schools. Specifically, the EEF sought to fund pilot projects designed to increase understanding in relation to how schools can be supported in applying existing research findings to improve outcomes and narrow the gap in pupil outcomes. To address the EEF's aims the first author of this paper proposed a project (\"RLCs\") that would focus on addressing the aspects of OL described in Section 2.2. In addition, the project sought to use an innovative approach to circulating knowledge within schools through social network analysis that would be employed to find potential central actors within schools who were in favor of the idea of research-informed practice, and employing these people to act as research \"champions.\"\n4. Methods: This study, part of the larger RLC project, aims to enhance our understanding of teachers' perceptions of their school's climate in relation to using research/evidence, and key factors that may be associated with their perceptions. We employ a combination of analytic methods to test our hypotheses about the relationship between OL, professional relationships, trust, and climates of research/evidence-informed practice. Our survey instrument was used to gather data that measure OL, trust, and RE Use climate from 828 teachers of 43 schools. Social network data and analysis are used to quantify teachers' professional relationships in relation to expertise-seeking behaviors. 
We then use hierarchical linear regressions to explore the multilevel relationship between study variables in order to take into account the nested nature of our data (Raudenbush et al., 2004). Finally, we provide social network diagrams of professional relationships using social network software as a way to visualize the pattern of relationships between teachers and among study variables.\n5. Results: 5.1. Descriptives and correlations\n6. Significance: The RLC project, and its incumbent use of social network analysis to examine who within schools might be best suited to drive research/evidence informed self-improvement is the first of its kind in England. But the study also provides vital and new perspectives on the role of trust in achieving self-improvement. Based on our theory of action for the analysis (as espoused in our hypotheses) we argue that our results illustrate the importance of learning and trust in facilitating the types of relations needed to provide teachers with access to the research/evidence centered social capital that resides within a school. The work expands our notions of what is necessary to support the use of research/evidence in schools by placing more relational elements of the improvement equation front and center (i.e. it builds on the work of the work of Daly, 2010; Daly and Finnigan, 2011; Liou et al., 2015, by directing it at research use).\n", "role": "user" }, { "content": "Purpose: Many governments worldwide are now promoting the importance of research-informed efforts at improvement. At the same time research is yet to make sustained impact on the practices of teachers. 
Given the importance of the issue and the lack of progress in this area, the purpose of this paper is to examine what drives teachers' perceptions that their school: first, encourages the use of research evidence to support improvements to teaching; and second, whether school improvement strategies are grounded in research on effective practice.\nDesign/methodology/approach: Reviewing extant literature, the authors hypothesize that teachers' perceptions of research use are related to their perceptions of: the presence of in-school organizational learning (OL) factors; whether they work in high-trust environments; and also to the frequency and quality of their \"expertise-seeking\" interactions. Using a survey instrument to measure OL, trust, and school research use climate, the authors gather data from 828 teachers in 43 schools. The authors then use social network analysis to quantify teachers' professional relationships; with hierarchical linear regression employed to explore multilevel relationships between variables. Social network diagrams are also used to visualize the patterns of relationships between teachers/study variables.\nFindings: The analysis indicates that teachers who report the climate of their schools to be focussed on learning, experimentation, and valuing new ideas, tend to also report more use of research/evidence. Likewise, teachers who had more frequent and useful interactions around teaching and learning also report more research/evidence use (RE Use) in their schools. Finally, and perhaps most powerfully, is that higher levels of perceived trust in the school are also associated with reporting higher levels of RE Use.\nOriginality/value: The results illustrate the importance of learning and trust in facilitating the types of relations needed to provide teachers with access to the research/evidence centered social capital that resides within a school. 
The work expands the notions of what is necessary to support the use of research/evidence in schools by placing more relational elements of the improvement equation front and center.\n", "role": "assistant" } ]
Clinical dashboards and their use in an adult mental health inpatient setting, a pilot study
[ "Clinical effectiveness", "Patient safety", "Psychiatry", "Quality improvement", "Patient experience" ]
[ { "content": "Summarize the following paper into structured abstract.\n\nIntroduction: Clinical dashboards are a method of visually displaying aspects of performance data relevant to patient care. Clinical dashboards have been widely used in general health services since Lord Darzi's 2008 Next Stage Review Report and have been a helpful method of providing real-time data on clinical performance. Despite their presence in physical health care settings, there has been limited application within mental health care services. In this brief paper we discuss their background, content, our own experience and the benefits of their use within a mental health setting. We articulate our opinions based on our experience and upon a questionnaire delivered to patients and clinicians working on two rehabilitation wards in the East of England.\nBackground: As noted, the use of clinical dashboards in health care settings began with the Next Stage Review Report where the Department of Health (DoH) (2008) reported on the underuse of dashboards and advocated their use within all clinical areas. The National Health Service (NHS) Connecting for Health (2012) initiative further defined a clinical dashboard as \"[...] a toolset developed to provide clinicians easy access to the wealth of NHS data that is being captured locally in a visual and usable format, whenever they need it [...]\" Following the publication of the report of the Mid Staffordshire NHS Foundation Trust Public Inquiry (Francis, 2013), there has been a drive to have usable clinical information at ward and board level. The aim is to ensure that care, management decisions and board level activity have patients at the centre, and in order to achieve this having the right information is crucial.
Subsequent reviews of organisations by Sir Bruce Keogh have further highlighted the significance of having accurate clinical information along key outcome lines, so that these can contribute to clinical improvement and planning.\nClinical dashboards: Within the NHS, there is a requirement for innovation, high quality data and clinical engagement. The emphasis therefore ought to be on quality and ensuring health care services make the most of limited resources. The dashboards also appear helpful in relation to service line management by allowing front line clinicians to lead using an operational overview of clinical performance. High quality data remains critical to drive clinical performance within teams, ensuring that safe and effective treatment is given to patients in the most appropriate manner.\nMethodology: Following internal discussion on their relative merits at our clinical business and ward meetings, we decided to pilot these on two of our wards (an open rehabilitation and a locked rehabilitation ward). A literature review was conducted prior to collecting any data, with the results summarised above. The two wards included a population of 30 patients, mostly detained under the Mental Health Act. The decision to commence the pilot was made with the understanding of senior management and support from the ward manager and clinicians within the ward.\nSample: Staff survey: this included 25 members of staff across two psychiatric rehabilitation wards.\nDesign: After displaying the clinical dashboards on the ward corridors, we provided an anonymised questionnaire to gather feedback from patients and clinicians, including nurses, health care support workers, occupational therapists, doctors, pharmacists and psychologists. The questionnaire comprised the following questions:\nResults: There were 23 staff respondents (out of 25) from a variety of different professional groups, including administrative staff and visiting professionals (Table II).
The feedback on the questions is as follows (Figure 2).\nPatient feedback: Patients answered yes or no to the following questions (Table III).\nLimitations of the findings: The main limitations included the sample size. Clearly such a study is also limited by responder bias and a further recommendation would include examining the improvements over a longer period of time, for example in six months, after amendments had been made. The study utilised paper-based, manually updated printed dashboards, but it is envisaged that in the future these could be made into an electronic version. Furthermore, it is recognised that staff appeared to find the dashboards helpful; however, we did not capture why. This would have been useful in understanding the benefits to clinical practice.\nDiscussion: It is our impression that clinical dashboards represent an efficient way of collecting and summarising a variety of clinically useful data in mental health settings. In this pilot the clinical dashboards were viewed as being useful tools for clinicians, supporting engagement. They can offer rapid access to large volumes of clinically useful information, in a palatable format. The pilot suggested that they could be presented in different ways to make them easier to engage with; however, they could also result in more paperwork for clinicians. In our pilot there remain opportunities to develop them and they are adaptable to this. Examples of potential modifications include opportunities to display patient feedback and staffing information.
Despite this, the dashboards were seen to offer better data quality by utilising locally collected information, limiting data-cleansing issues and lag time, allowing teams to act upon an observed issue or problem.\nConclusion: Whilst offering a breakdown of the necessary parameters of quality, clinical dashboards provide an opportunity to customise data and obtain both cross-sectional and emerging trends, which can trigger a review of practice locally by clinicians. Patients appear to view these as a helpful intervention, increasing confidence in services and allowing a greater opportunity for involvement. The benefits lie both in the safe and effective delivery of interventions within services and in the ability to promote performance and clinical engagement amongst front-line staff - an absolute requirement in these dynamic times.\n", "role": "user" }, { "content": "Purpose: The purpose of this paper is to explore the experiences and opinions of mental health professionals working in two rehabilitation wards regarding a clinical dashboard system.\nDesign/methodology/approach: Following the creation of the clinical dashboards, a questionnaire was developed and sent to staff and patients across two clinical wards involved in the clinical dashboard mental health pilot.\nFindings: The clinical dashboards were viewed as being useful tools for clinicians, supporting engagement. They can offer rapid access to large volumes of clinically useful information, in a palatable format. The pilot suggested that they could be presented in different ways to make them easier to engage with; however, they could also result in more paperwork for clinicians.\nResearch limitations/implications: The main limitations included the sample size, responder bias and the limited sampling period.
It would have been helpful to have obtained further responses to understand why individuals came to their conclusions.\nPractical implications: The development and use of clinical dashboards in a psychiatric rehabilitation setting offered the opportunity to improve quality and to collect and respond to relevant clinical data trends, which was regarded positively by staff and patients.\nOriginality/value: This is the first study to examine the use of clinical dashboards within a UK long-stay adult mental health ward setting. The results suggest a positive response from both staff and patients and illustrate the potential benefits relating to clinical quality.\n", "role": "assistant" } ]
Using Google Trends and Baidu Index to analyze the impacts of disaster events on company stock prices
[ "Stock market", "AR-GARCH", "Crash incidents", "Search volume index" ]
[ { "content": "Summarize the following paper into structured abstract.\n\n1. Introduction: Various studies have described the impacts of disaster events on stock markets. For example, Barrett and Davidson et al. studied the effects of aviation accidents on stock returns in 1987; Lamb (1995) and Angbazo and Narayanan (1996) examined the effects of hurricanes; and Shelor et al. (1992) studied earthquakes and market fluctuations in 1991 and 1992. In recent years, the related research is also abundant. Robinson and Bangwayo-Skeete studied the economic and financial impact of hurricanes and tropical storms on small island developing states. Kowalewski and Spiewanowski (2017) examined the relation of stock markets to natural and man-made disasters and found that the response of stock markets to natural disasters is stronger. Matthew et al. studied the effect of landfall hurricanes on stock returns and found that value stocks are more prone to disaster risk than growth stocks. According to Lee et al. (2018), the effect of a natural disaster has spill-over effects. The results of the research showed that the 2008 Sichuan Earthquake in China caused the most substantial contagion effect in the stock markets of neighboring Asian countries.\n2. Research background: 2.1 Effect of incidents on stock markets\n3. Public attention: Research and analysis of stock markets traditionally pay little attention to internet investors. Consequently, such work has not captured the relationships between investors' attention and markets in today's era of Big Data. Our current investigation focuses on open-source data made available through search engines to extract representative information and to establish indices that reflect investor attention to Malaysia Airlines' crashes.\n4. Establishing investor attention indices: To create investor attention indices, we first screen the voluminous search data for keywords related to Malaysia Airlines' accidents. We analyze the keywords by time sequence. 
From this, we create initial attention indices by adjusting time differences and weighting the indices. Once this is complete, we judge whether the indices could be modified and improved upon by using keywords that might be more descriptive.\n5. Empirical analysis: This empirical study examines how an incident affects stock prices by way of investor attention, as illustrated by the example of the Malaysia Airlines Flight 370 crash. Given that stock price is affected by many factors such as economy, policy and so on, how to exclude the factors other than the incident is an important issue in empirical design. Some studies have found that there is a strong co-movement between the stock market composite index and individual stocks (Kwon and Yang, 2008; Chi et al., 2016), which means that these external factors simultaneously affect the whole stock market and individual stocks. Therefore, we use the Malaysian stock market composite index as a control variable to represent the impact of other common factors on stock prices. The first objective is to establish that the search index affects the stock price and to ascertain the sign and timing of this effect using the Granger causality test and the VEC model. The second objective is to investigate correlations between the search index and volatility.\n6. Discussion and conclusion: This research utilized internet search data extracted from Google Trends and the Baidu Index to capture and measure public attention to the accident of the Malaysia Airlines aircraft. Combining IRF, the AR-GARCH model and the search indices, we measured how such events affect company stock price and volatility.\n", "role": "user" }, { "content": "Purpose: With the ascendance of information technology, particularly through the internet, external information sources and their impacts can be readily transferred to influence the performance of financial markets within a short period of time. 
The purpose of this paper is to investigate how incidents affect stock prices and volatility using vector error correction (VEC) and autoregressive-generalized autoregressive conditional heteroskedasticity (AR-GARCH) models, respectively.\nDesign/methodology/approach: To characterize the investors' responses to incidents, the authors introduce indices derived using search volumes from Google Trends and the Baidu Index.\nFindings: The empirical results indicate that an outbreak of disasters can increase volatility temporarily, and exert significant negative effects on stock prices over a relatively long period. In addition, indices derived from different search engines show differentiation, with the Google Trends search index mainly representing international investors and appearing more significant and persistent.\nOriginality/value: This study contributes to the existing literature by incorporating open-source data to analyze how catastrophic events affect financial markets and effect persistence.\n", "role": "assistant" } ]
Are post-Soviet republics ready for the new public management? The case of educational modernization in Kazakhstan
[ "Kazakhstan", "Higher education", "New public management", "Bologna Process", "Post-Soviet republics" ]
[ { "content": "Summarize the following paper into structured abstract.\n\n1. Introduction: Higher education is widely viewed as a key factor in human development and an engine for the creation of a knowledge-based economy. In developed economies such as Australia, Canada, France, Germany, the UK and USA, higher education reforms during the last decades of the twentieth century were strongly influenced by the new public management (NPM). Although many studies have addressed the impact of NPM on higher education (Hood, 1991; Ferlie et al., 1996; Schick, 1998; Allen, 1999; Sutch, 1999; Manning, 2001; Tolofari, 2005; Drechsler, 2005; Leisyte and Kizniene, 2006; Keller, 2008; Sigman, 2008; Altbach, 2009; Lorenz, 2010; Sima, 2011; Baltasiu, 2012; Firsova, 2014), almost all of this research is focussed on NPM in western countries. Of the small amount of work on NPM and higher education in developing countries, none has addressed challenges with respect to the post-Soviet states (former republics of the USSR).\n2. Public management reform in post-soviet states: NPM has been the most influential public administration reform paradigm of the last three decades (Pollitt and Bouckaert, 2004). Although NPM is an umbrella term for a complex and varied set of administrative transformations spanning many different jurisdictions and policy sectors, all NPM reforms share the goal of overcoming problems associated with traditional \"Weberian\" public bureaucracies (Aucoin, 1990). Concerns with the traditional bureaucratic model of public service delivery emerged in the west in the 1970s as politicians and the public increasingly lost faith in the capacity of government organizations to deliver services effectively and efficiently. There was a widespread sense that public bureaucracies were insulated from democratic control and unresponsive to citizens' needs (Pollitt and Bouckaert, 2004). 
NPM reforms in these countries included one or more of the following elements: reducing the number of procedural rules imposed on administrators so as to free them to determine the best ways to implement programs; transferring many day-to-day service delivery functions from central departments to new \"quasi autonomous\" entities; imposing greater accountability on public officials for achieving the \"results\" sought by the government; incorporating private sector practices such as risk management, customer service and quality assurance; transferring functions to the private sector via contracting out or privatization to take advantage of the alleged greater efficiency of private provision; and introducing competition in the delivery of public services, both within the public sector and between public and private sectors, to improve customer responsiveness (Aucoin, 1990; Dutil et al., 2010; Hood, 1991).\n3. Methodology: This paper presents a reflective case study based on a combination of direct practitioner experience in the sector and empirical research conducted over an extended period by the first author in the process of completing her PhD dissertation on the state regulation of business education in Kazakhstan. The reflective case study has become a common method of studying educational practice and change (Navarro, 2005; Oldland, 2011). In the reflective case study approach, the author critically interrogates their own practical experience in a policy domain. This critical self-reflection is achieved by juxtaposing personal experience against data collected systematically using social science techniques (Scott, 2014). The paper's first author has 15 years experience in educational administration in Kazakhstan, including four years of involvement in the university-level implementation of Bologna reforms. This was supplemented by five qualitative interviews with educational experts. 
The first author also carried out a review of all relevant policy and administrative documents covering Kazakhstan's implementation of Bologna educational reforms.\n4. Case study of educational modernization: In this section we use the implementation of the European \"Bologna Process\" in Kazakhstan as a case to investigate the impact, progress and appropriateness of NPM reforms in post-Soviet republics. The original Bologna Process involved creating a European Higher Education Area \"to harmonize European educational programmes, so as to provide comparable, compatible and coherent systems of higher education in the region\" (Crosier and Parveva, 2013, p. 9). The original group of 29 countries signed the Bologna Declaration in 1999. It proposed \"a common degree structure, the introduction of a common credit system and quality assurance mechanism, and the promotion of the mobility of students and academic and administrative personnel between institutions and countries\" (Crosier and Parveva, 2013, pp. 9-10). The European NPM-inspired Bologna reforms are widely regarded as an appropriate solution to the core problems and dilemmas of post-Soviet higher education (Country report, 2012). Kazakhstan joined the Bologna Process in 2010 and became its 47th signatory state.\n5. Discussion: Since obtaining independence in 1991, Kazakhstan has adopted a clear prioritization process: first economic development, then political modernization. There is evidence of gradual economic growth. In terms of political and public management reforms, there is much still to be done. Several fields of administration need further development, including refining the structure of the state administrative apparatus, transitioning to a results-oriented planning system, professionalization of civil servants and improvement of quality in service delivery. Corruption in the public sector continues to be a major obstacle to administrative reform. 
According to the report of Transparency International (2013), Kazakhstan occupies 140th position among 177 countries (in 2012 it was in 133rd position). There are particular concerns about education. While data for Kazakhstan is not easily accessible, in Tajikistan, \"the Ministry of Education is reportedly the most corrupt state body in the country\" (Sabzalieva, 2014, p. 14). Therefore, in his Address to the People of Kazakhstan (2014), \"Kazakhstan Way-2050: The common goal, common interests, common future,\" President Nursultan Nazarbaev emphasized the problem of corruption. Importantly, he stressed the role of NPM principles in the anti-corruption strategy: \"the most important task is to continue the formation and implementation of the new anti-corruption strategy. Administrative reform should not involve tedious and unnecessary paperwork and documentation. We need to give more autonomy to local governments, while enhancing their accountability for results and increasing their accountability to the public\" (Presidential Address, 2014).\n", "role": "user" }, { "content": "Purpose: Since the collapse of the USSR, former Soviet republics have embarked on public service modernization, in most instances drawing on internationally dominant new public management (NPM) principles. Are post-Soviet republics ready for these administrative prescriptions? The paper aims to discuss these issues.\nDesign/methodology/approach: This paper discusses Kazakhstan's experience with the implementation of NPM through a qualitative case study of the country's adoption of the European Bologna higher education reforms.\nFindings: While implementation of the NPM-inspired Bologna program has produced significant achievements, there are also gaps and shortcomings. 
These are due to remnant Soviet administrative practices, including strong control by educational ministries, as well as incompatible organizational cultures and a tendency toward superficial formalism in the implementation process.\nResearch limitations/implications: NPM tends to be introduced in a top-down fashion as a taken-for-granted component of state transformation, without sufficient attention to the capacities, cultures and systems required for effective and accountable performance-driven administrative reform.\nOriginality/value: Kazakhstan's experience provides crucial insights into the governance structures, professional cultures and managerial capacities required for successful implementation of NPM in post-Soviet states.\n", "role": "assistant" } ]
The grandparent consumer: a financial "goldmine" with gray hair?
[ "Grandparents", "Expenditure", "Buying behaviour" ]
[ { "content": "Summarize the following paper into structured abstract.\n\n__NO_TITLE__: An executive summary for managers and executive readers can be found at the end of this article.\nResearch methodology: Given the nature of the study, it was determined that a telephone survey would be the most appropriate method of data collection. Telephone interviews can be conducted in a reasonably short period of time, and thereby reduce the risk of extraneous events occurring during the data collection process, which could adversely affect the quality of the information obtained. Additionally, this method allows the interviewer to provide a limited amount of explanation of the questions in the event that respondents have cognitive and/or audio limitations. Respondents for this survey were selected using a two-tier randomized sampling plan. Initially, the USA was divided into four broad geographic regions (i.e. West, Midwest, South, and East). Then, cities were selected within those regions that were believed to provide a reasonable representation of those regions. Consideration was given to population and demographic characteristics of the regions. Finally, respondents were randomly selected within each city. A total of 500 interviews were completed. The questionnaire contained six product categories: 1. product presents (hereafter \"presents\"); 2. entertainment (e.g. movies, sporting events); 3. school supplies; 4. travel and vacations; 5. gift cards; and 6. financial investments (e.g. contributions to savings accounts). These were designed to cover areas in which grandparents might be more inclined to spend for birthdays and holidays, to assist with grandchild education, to spend time with grandchildren (i.e. entertainment, travel and vacations), and help grandchildren prepare for their future (i.e. financial investments). 
Because gift cards have become so popular, they were separated out and distinguished from other presents.\nResults: The results of this study are presented in four sections: Respondent characteristics, Overall annual spending on grandchildren, Annual spending by grandparent characteristics, and Perceived ability to spend money.\nRespondent characteristics: Respondents were asked their ages, how many grandchildren they have, how many of the grandchildren live in their households, the ages of the youngest and oldest grandchildren, and how often they see their grandchildren. Most respondents (67.0 per cent) are female. The largest groups of respondents are 75 or older (38.6 per cent) and 65 to 74 (34.5 per cent). Overall, 73.1 per cent of the respondents are 65 or older while 26.9 per cent are under 65 years of age. The single largest group of respondents (29.4 per cent) has two or three grandchildren, and the next largest group (25.0 per cent) has four or five grandchildren. Overall, 47.0 per cent of the respondents have three or fewer grandchildren while 28.0 per cent have six or more. While 86.0 per cent do not have grandchildren living in their households, nearly one in seven (14.0 per cent) do. Furthermore, 29.4 per cent of the respondents indicated that their youngest grandchild was two or younger while 28.4 per cent said their youngest was 11 or older. And, 47.0 per cent of the respondents see their grandchildren two or fewer times per month, while 29.4 per cent see them nine or more times per month.\nOverall annual spending on grandchildren: Respondents were asked how much they spend in an average year for selected items for their grandchildren. Their responses are presented in Table I. Shown in Table II are the percentages of respondents who indicated they spend at least something on their grandchildren for the product/service category. 
The highest per cent (92.4 per cent) spend on presents, and the lowest per cent (10.4 per cent) spend on travel and vacations. Among the respondents who do spend money on their grandchildren, the average amounts are shown below for each product/service category. Although relatively few respondents spend on travel and vacations, those that do so spend considerably more on this product/service category than on other products or services. While spending on travel and vacations had the highest average ($799.02), spending on school supplies had the least ($200.00). As shown in Table III, the average grandparent spends $1,026.98 per year on their grandchildren in just the categories considered in the study.\nAnnual spending by grandparent characteristics: Comparisons were made of annual spending based on selected grandparent and grandchild characteristics. Spending by grandparent gender\nPerceived ability to spend money: Respondents were asked if they feel they can afford to spend as much as they would like on their grandchildren, whether they owned their homes and had mortgages, and if they have borrowed money or taken out a loan since becoming a grandparent. Their responses are presented in Table IV. Despite the amount they already spend, the great majority of respondents (72.2 per cent) do not feel they can afford to spend as much as they would like on their grandchildren. Only 27.8 per cent of the respondents are spending as much on their grandchildren as they would like. Grandparents who felt they could afford to spend as much as they liked on their grandchildren spend substantially more than those that do not on presents ($835.15 versus $478.14), school supplies ($270.00 versus $173.98), entertainment ($305.97 versus $171.53), travel and vacations ($1,046.29 versus $520.83), gift cards ($254.54 versus $208.37), and financial investments ($666.13 versus $350.95). Respondents were asked if they owned their homes and if they still had mortgages on those homes. 
This was considered to be a significant indicator of current and future financial resources that could be used to sustain spending patterns. The great majority of respondents (91.6 per cent) own their homes, and 61.4 per cent do not have a home mortgage. Comparisons were made of annual spending based on home ownership and borrowing history. These were considered important elements of grandparent ability to finance their purchases for their grandchildren. Spending by whether grandparents own their home and have a mortgage\nManagerial implications and applications: Based on the findings of this study, it is clear that grandparents represent a highly attractive market. Nearly all grandparents (94.8 per cent) spend money on their grandchildren. Among all grandparents irrespective of whether they spend on their grandchildren or not, the average annual spending on grandchildren is $973.58. Of the 94.8 per cent of grandparents who do spend on their grandchildren, their average is $1,026.98 annually. As shown in Table V, this makes the total market potential (i.e. average spending multiplied by the number of grandparents) more than $54.5 billion annually, or $149.4 million per day, in just the product/service categories included in this study. The majority of grandparents spend $500 or less on their grandchildren annually, and more than three of four grandparents spend about $250. The implication is that grandparents may be somewhat prudent in their spending and thereby more able to sustain these levels of expenditures over time. This conclusion is consistent with the finding that nearly three of four grandparents are not spending as much as they would like on their grandchildren. This finding also supports the concept that marketing to large groups, even if they have relatively small incomes, can provide very significant market opportunities. For example, even at a low level of $250 per year, the grandparent market amounts to nearly $13.3 billion (i.e. 
$250 multiplied by the number of grandparents). Conventional thinking that grandmothers are the biggest spenders on grandchildren may need to be revisited. Male grandparents believe they spend 13.8 per cent more on their grandchildren than do female grandparents ($1,117.64 versus $982.08). The implication is that targeting either grandfathers or grandmothers could provide positive results. Older grandparents (65 or older) spend 8.7 per cent more on their grandchildren than their counterparts who are 64 or younger ($1,032.76 versus $950.53). This suggests that grandparents could become increasingly attractive as they become older because they may shift more of their spending to their grandchildren as their personal needs decline. Clearly, it \"pays\" for grandchildren to visit their grandparents. Grandparents who see their grandchildren three or more times in a month spend 54.2 per cent more than those that see their grandchildren two times or less ($1,223.81 versus $793.87). Companies targeting the grandparent market may want to focus on those who have grandchildren residing in the local area since opportunities to spend time with grandchildren should be enhanced. Grandparents who have only one grandchild spend less in total than those who have two or more grandchildren ($739.02 versus $1,087.23). This is critical since it shows that grandparents increase their total spending rather than just redistribute a finite amount among more grandchildren. In essence, the dollar market potential grows with a rise in the number of grandchildren. And with increased life expectancies, grandparents can expect to have more grandchildren, thereby creating an even larger market potential over time. There is an old adage that as consumers become older, their \"toys\" become more expensive. 
Grandparents appear to adhere to this since those that have grandchildren three or more years old spend 85.7 per cent more than those that have grandchildren two years of age or younger ($1,053.66 versus $567.31). This suggests that there is considerable upside growth potential in the dollar value of this market as people and their grandchildren get older. It appears that grandparents exhibit at least some financial responsibility when it comes to spending on their grandchildren. Grandparents who own their own home spend substantially more on their grandchildren than those that do not, and grandparents who have mortgages spend less than those that do not. Grandparents who have borrowed or taken out a loan since they became grandparents spend less on their grandchildren than those who have not taken out loans. The implication is that this market could be sustainable over time as grandparent spending seems to be influenced by their financial position.\n", "role": "user" }, { "content": "Purpose: - The purpose of this research study is to examine how significant the grandparent market is, their spending on grandchildren, and whether the market is sustainable over time. The \"grandparent\" is an important segment of the senior population, projected to grow to 25.9 per cent of the population by 2010.\nDesign/methodology/approach: - A telephone survey of 500 grandparents was conducted using a randomized sampling plan. Purchasing levels were designed to cover six product/service categories in which grandparents might be more inclined to spend: non-gift card presents, gift cards, school supplies, entertainment, travel and vacations, and financial investments.\nFindings: - The findings indicate that grandparents represent a highly attractive market. Nearly all grandparents (94.8 per cent) spend money on their grandchildren - nearly $975 per year. 
This makes the total market more than $54.5 billion annually, or $149.4 million per day, in just the product/service categories included in the study. Furthermore, grandparent spending increases with the age of the grandparent, the number of times they see their grandchildren, and the number and ages of their grandchildren.\nOriginality/value: - Given the size of the market and the spending power of grandparents, companies would be justified in directing considerable resources to targeting grandparents for goods and services for their grandchildren. While seniors may have limited spending capabilities, they appear willing to reallocate their funds to grandchildren.\n", "role": "assistant" } ]
Rapid needs assessment: an evidence-based model
[ "Design and development research", "Evidence-based model", "Model development", "Need analysis", "Needs assessment", "Performance problems" ]
[ { "content": "Summarize the following paper into structured abstract.\n\n__NO_TITLE__: Solving performance problems is a critical task in every organization, but these problems are often unknown entities, and mostly ill-defined and shrouded in ambiguity (Lewis and Bjorkquist, 1992; Jonassen, 2004). To solve particularly unstructured and complex problems, it is necessary to analyze the causes of a problem and determine how it should be addressed using a needs assessment. In general, a needs assessment is defined as the process that identifies the gaps between the current status and desired status (Gupta et al., 2007; Kaufman, 1992; Rothwell and Kazanas, 2004). This activity of identifying the causal relations among variables is considered a crucial part of solving a complex problem. Conducting a needs assessment has been widely recognized by many researchers in human resource development (HRD), corporate training professionals and even public educators as the front-end step of any intervention design. It is especially useful when organizations are attempting to solve an organizational problem, plan and allocate resources, and develop interventions to enhance individual and organizational performance (Gupta et al., 2007; Leigh et al., 2000; Sleezer, 1991).\nLimitations of current needs assessment approaches: Although there is some variation in the definition, needs assessment is generally defined as the formal process of identifying the gaps between current and desired results (Kaufman, 1992). A needs assessment is a front-end activity and essential step for designing and developing interventions for HRD and performance improvement (Dachner et al., 2013; Dick et al., 2001; Swanson, 2007).\nMethods: Design and development research\nFindings: From the analysis of the in-depth interviews and validation process, a needs assessment model was developed. 
The model consists of five interrelated steps:\nDiscussion: In this study, I explored the process of conducting a needs assessment based on evidence gained from the participants' experiences of conducting needs assessment projects. To investigate a more efficient and effective needs assessment process, I captured the lived experiences of the participants. A design and development research methodology was adopted to develop the needs assessment model. The salient finding of the study was confirming the needs assessment process based on real experiences from five different needs assessment projects, and developing a rapid needs assessment model. The rapid needs assessment model has features distinct from those of the extant models.\nImplications and recommendations for future research: This study contributes to building knowledge about the needs assessment process in that it is based on the actual needs assessment experiences of the participants. Most previous studies on needs assessment are the products of logic-based conceptualizations. Consequently, the extant models fail to demonstrate how an effective needs assessment process proceeds and what professionals should do. In this study, I used a design and development research methodology that is particularly useful for constructing an evidence-based model. Thus, the rapid needs assessment model illustrates for researchers how to develop a needs assessment model.\nLimitations of the study: The present study has two limitations. First, this study was conducted based on the case experiences of a small number of graduate students who engaged in needs assessment projects as a part of their coursework. Although the setting and context of each project were authentic, there are still gaps between in-class activities and real workplace tasks. Furthermore, the knowledge, skills and expertise of the participants may not be equated with those of professionals or consultants. 
Although some of the participants had professional HRD-related career experience, the collective experience of the participants may not be the same as that of professional analysts. Thus, it is worth noting that the findings of the study only reflect the experiences of the participants and context of this study.\n", "role": "user" }, { "content": "Purpose: Although conducting a needs assessment has been accepted as the first step in solving performance problems, many human resource development (HRD) and training professionals still find it difficult to conduct an effective and efficient needs assessment. The primary reasons are the lack of a guiding model based on empirical evidence and the pressing need for a cost-effective approach. To resolve these issues, this paper aims to explore ways to conduct a needs assessment and propose an evidence-based needs assessment model.\nDesign/methodology/approach: Following the guidelines provided by Richey and Klein (2007), a design and development research study was conducted. From the in-depth interviews with participants, the author proposed a more rapid model of needs assessment.\nFindings: The salient features of the rapid needs assessment model are utilizing hypothetical interventions at an earlier stage in the process and gaining a commitment from the client across the entire process. Due to the exploratory nature of the present study, the rapid needs assessment model calls for further investigation in terms of strengths and weaknesses, and validation through multiple-case studies.\nOriginality/value: Three significant features of the rapid needs assessment model are different from the existing needs assessment models: developing initial hypothetical interventions, engaging in an interactive and cyclical process and including the client in the whole process. 
The proposed model, which lays out the process and activities of needs assessment, can also be used to guide HRD professionals, especially when the project due date is imminent, when performance problems should be addressed quickly or when the client requires results or at least the blueprint at an earlier phase.\n", "role": "assistant" } ]
Serving yourself: value self-creation in health care service
[ "Health care", "Value creation", "Self-service", "Value self-creation" ]
[ { "content": "Summarize the following paper into structured abstract.\n\n1. Introduction: In My Life and Work, Ford (1922) remarked that \"any customer can have a car painted any color that he wants so long as it is black\". This famous quote exemplifies a value delivery philosophy where organisations determine, create, and deliver value to consumers. Here, consumers are left to accept or reject the value propositions determined by organisations, playing an inactive role in the value creation process, and are viewed as passive recipients of firm activities, separated from and outside of the firm (Deshpande, 1983; Payne et al., 2008). This value delivery perspective continued to underpin marketing activity in the years that followed and is best exemplified by the concept of the value chain (Porter, 1985). It continues to underpin marketing activity to date, particularly with regards to the attainment of competitive advantage for organisations.\n2. Literature review: 2.1 Self-service\n3. Conceptual framework: This study proposes the concept of value self-creation in self-services and uses health care management as a context to operationalise this conceptualisation. Health care management requires consistent and long-term engagement in various health self-managed activities to maintain well-being and is a suitable context to examine value self-creation. We define value self-creation as a value creation process whereby the consumer assumes primary responsibility for creating value in a consumption experience, occurring independent to the provider.\n4. A proposed model of value self-creation in health self-service: As it has been argued that consumers can be the \"sole-creators of value\" (Gronroos, 2006, p. 34), we developed a model of value self-creation (Figure 2).The model identifies key value drivers in the form of participant resources provided towards the self-service, key value dimensions experienced, and marketing outcomes.\n5. Method: 5.1 Study context\n6. 
Results: To answer the overarching research question, \"RQ. How is value self-created in health care self-management?\" and to test the hypotheses, SEM was conducted in the SPSS AMOS 21 software programme. The reported findings are assessed based on the estimated path coefficient b value with critical ratio (C.R. equivalent to t-value) and p-value. Standard decision rules (t-value >= 1.96 and p-value <= 0.05) are applied to decide the statistical significance of the path coefficient between the dependent and independent variables (Byrne, 2010).\n7. Discussion: The current study sought to address the overarching research question, \"RQ. How is value self-created in health care self-management?\" In doing so, the study first identified the consumer resources that are relevant in value self-creation, namely, behavioural contributions, cognitive contributions, and consumer readiness. The study also identified that the three value dimensions that were self-created, functional, emotional, and social, all had positive influences on important service outcomes of satisfaction with the self-service experience and behavioural intentions to engage with the self-service again in the future.\n", "role": "user" }, { "content": "Purpose: This paper aims to investigate the concept of value self-creation and provides a formal definition for this concept. The paper suggests that it sits within an overall continuum of value creation that includes value delivery and value co-creation.\nDesign/methodology/approach: A proposed model of value self-creation was developed and empirically tested in a health care self-service, bowel screening. 
An online, self-completion survey was administered to Australian men and women aged 50 years and above, as this represents the primary target population for bowel screening.\nFindings: The results of the structural equation modelling in AMOS suggest that consumers can self-create value, leading to desired outcomes of satisfaction with the consumption experience and behavioural intentions to engage with the self-service again in the future. The findings provide empirical evidence to suggest that consumers' behavioural contributions represent the most important consumer contributions in self-service, followed by cognitive contributions.\nOriginality/value: The study provides an empirically validated model of value self-creation in health care self-service. Much of the existing research on value co-creation has concentrated on traditional service types and is ill-placed to explain the value creation processes in self-services. This study offers originality by addressing this gap and demonstrating to service managers how they can manage consumer contributions towards a self-service and facilitate value-self creation, even though they are not present during the consumption stage of the consumption process.\n", "role": "assistant" } ]
Comparing German and Italian food banks: Actors' knowledge on food insecurity and their perception of the interaction with food bank users
[ "In-depth interviews", "Poverty", "Qualitative content analysis", "Food aid", "Knowledge forms" ]
[ { "content": "Summarize the following paper into structured abstract.\n\nIntroduction: Food insecurity is a problem that is occurring in many developed countries, including Western Europe. Approximately 20 percent of all German citizens are affected by the risk of food insecurity and social exclusion (Eurostat, 2015). In Italy, the situation is even worse with approximately 28.7 percent of the population affected by the risk of food insecurity and social exclusion (Eurostat, 2015). In both countries, food insecurity refers to people receiving food aid, and not being able to supply themselves with nutritious, safe and appropriate food to maintain an active life (Riches and Silvasti, 2014, p. 6). People affected are often dependent on social welfare (Tarasuk et al., 2014a), and in addition to their financial struggles, often have psychological and medical problems (Lorenz, 2012a, b; Caraher and Cavicchi, 2014; Lambie-Mumford and Dowler, 2014; Chiu et al., 2016).\nLiterature review: In an effort to systematize food bank personnel's knowledge, an existing knowledge classification system is considered. De Jong and Ferguson-Hessler (1996) presented four forms of knowledge: situational knowledge, conceptual knowledge, procedural knowledge and strategic knowledge. Situational knowledge refers to the knowledge on typical conditions or problem situations in a certain domain. Conceptual knowledge refers to knowing facts, understanding concepts and principles. Procedural knowledge is knowledge that allows identifying actions that can be used to solve problems. Strategic knowledge is the knowledge of what to do and when, by careful planning, informed decision making, as well as effective use of actions (De Jong and Ferguson-Hessler, 1996). These forms of knowledge originate from an educational context. 
In the present study, they are applied to a context of charitable food assistance.\nThe German and the Italian food banks: The German Food Bank started in 1993 as a social movement, and developed over time into a federal association with an umbrella structure (Federal Association of German Food Banks, 2017). The organization consists of local food pantries, which are united under the federal association. Local pantries differ in size, financial resources, and operational structure (Von Normann, 2011). Within the pantries, food is given out to the beneficiaries. In addition, extra services such as providing breakfast to school children, distributing second-hand clothing, and assisting users with medical appointments, banking tasks and dealing with government authorities are offered (Lorenz, 2012a, b; Reiniger, 2011; Von Normann, 2011). Users need to be registered in the German welfare system as financially deprived, and provide official documentation of their need to the food bank, in order to receive these pantries' services. Among the beneficiaries are 24 percent children and young adults up to 17 years, 65 percent adults, and 11 percent retirees (Von Normann, 2011).\nMethods: The study uses a qualitative research approach to explore the research question in-depth. A qualitative approach is appropriate when a theory is to be developed, an unknown research topic explored, or a new perspective added to a well-investigated topic (Bitsch, 2005). As the knowledge of food bank actors regarding food waste and food insecurity in Germany and Italy remains largely unknown, a qualitative approach is appropriate. Further, exploring interactions within food banks requires the perspectives of multiple actors involved. Therefore, in-depth interviews were deemed suitable.\nResults and discussion: German and Italian food bank personnel shared their knowledge about food insecurity and food bank interactions in their respective countries. 
Managers and volunteers showed at least situational knowledge in both countries. Furthermore, managers and volunteers shared experiences and examples from their social engagement. When sharing their perception of their interaction with food bank beneficiaries, German as well as Italian managers and volunteers emphasized the beneficiaries' positive or negative emotional reactions when receiving food. Particularly Italian food bank personnel described a variety of problems in the context of food insecurity.\nConclusions: The study investigated food bank actors' knowledge of food insecurity in Germany and Italy, as well as their perception of the interactions between food bank actors and food bank users. Results showed that all food bank actors interviewed had at least situational knowledge of food insecurity. Managers from the Italian food bank also showed procedural knowledge. Interactions between food bank personnel and users were affected by a mismatch of perception within the situation and feelings of gratitude, shame, anger and disappointment.\n", "role": "user" }, { "content": "Purpose: The purpose of this paper is to investigate food bank actors' knowledge of food insecurity in Germany and in Italy, as well as interactions between food bank actors and food bank users. The study builds on a knowledge framework from an educational context and applies it to food banks.\nDesign/methodology/approach: The study uses a qualitative research approach. In all, 22 in-depth interviews were recorded, transcribed and analyzed through inductive qualitative content analysis.\nFindings: German and Italian food bank actors interviewed had at least situational knowledge on food insecurity. Some actors of the Italian food bank also showed procedural knowledge. 
Interactions between food bank personnel and users were affected by feelings of gratitude, shame, anger and disappointment.\nOriginality/value: The study explores food bank personnel's knowledge on food insecurity, which appears to be a knowledge gap, even though many prior studies were dedicated to food banks and food insecurity. The study contributes to knowledge systematization to provide best practice recommendations for volunteer-user interaction, and suggests how food bank managers and volunteers' knowledge can be improved.\n", "role": "assistant" } ]
Foresight versus FP7: comparing innovations in healthcare
[ "Europe", "Innovation", "Health services" ]
[ { "content": "Summarize the following paper into structured abstract.\n\nIntroduction: The European Framework Programmes (FP) for Research and Technological Development (RTD) are the most important policy instruments of the European Union (EU) for the support and encouragement of European research. The present so-called FP7 budget for the funding period 2007-2013 is around EUR60 billion. However, the detailed objectives and actions vary from one funding period to another. Consultation with experts in the field of research and policy, together with future-oriented studies, are important tools for focusing the so-called working programmes and \"calls\" of each funding period.Foresight is considered an important tool for many governments, industries and other organisations to identify possible future foci in research. Outcomes are used to prioritise research areas and to establish cooperation between the academic world, industry and non-governmental organisations. The European Foresight Monitoring Network started its work in 2004 with the mapping of foresight initiatives, on the understanding that such initiatives can provide a strong view on what experts around the world believe the priorities in research should be.As the FP7 Work Programme can be seen as an example of strategy - including many priorities for research - and foresight is used as a process to identify priority areas for research, it might be assumed that the findings and perspectives of foresight exercises should largely overlap with the priorities in FP7. A comparison between both can provide insight on the alignment between the two. As the expert community contributes to both perspectives, a question emerges as to the extent future-oriented perspectives of the expert community make it into the FP Work Programmes. This is of interest since some stakeholders claim these Work Programmes to be mostly the product of political bargaining. 
Whilst FP7-foresight alignment will not necessarily falsify such a claim, it will suggest that FP7 programming is at least evidence-based and future-oriented. If, on the other hand, the profiles show a mismatch, then it could be argued that either FP policymakers should look more towards foresight for potential new topics for upcoming Work Programmes, or that foresight programmes should seek to better address those topics that are of interest to European policymakers. Essentially, these questions go to the heart of questions concerning the role and positioning of foresight vis-a-vis European public policy programming in science, technology and innovation.The question is how this comparison can be done. An in-depth discussion and analysis will take much time and will be biased by the experts involved. A more systematic method is possible through a framework developed by TNO for the analysis of different organisations and strategies on innovation. The core concept of this Dynamo approach is the use of a taxonomy on developments in innovation. This taxonomy, the product of several projects and some years of development, can be used to profile different entities on innovation in a comparable way.This paper describes the results of an analysis between, on the one hand, a selection of so-called \"flagship\" foresight initiatives collected by the EFMN and, on the other hand, the work programmes of the EU Seventh Framework Programme (FP7), using the Dynamo taxonomy on innovation. About 140 foresight initiatives were profiled, as well as the FP7 Key Actions from the present working plans of the FP7 Cooperation Programme (2007-2008). 
The paper is limited to innovations in healthcare, as a full analysis of innovation would create an overload of information and reduce the possibility of providing more in-depth analysis.\nResearch approach: Main objective: identifying match and mismatch\nThe Dynamo approach: The Dynamo framework\nAnalysing FP7 on healthcare innovations: FP7 backgrounds\nThe foresight perspective: A general analysis of foresight in innovation\nMatch and mismatches between foresight and FP7 on health: In Figure 11 an overview is given of the Innovation themes covered by the FP7 Cooperation Programme, or the selection of foresight activities, or both. Of the FP7 Cooperation Programme, an indication of the importance of themes is provided by budget allocations (percentage of health-related allocation). The indication of coverage by foresight activities is based on how many foresights address the theme, looking at the total of 140 foresights. The topics covered strongly by both FP7 and foresight activities include \"Cell and gene therapy\", \"Early diagnostics\" and \"Health information systems\". One can say that there is \"high common ground\" on these themes, which are seen as priorities in innovation. Themes such as \"Preventive healthcare\", \"Effective medicine\" and \"Medical imaging and radiotherapy\" are also considered important by both perspectives, although somewhat less of a priority. A third group of Innovation Themes are considered of less importance by both perspectives: \"Orphan drugs\", \"Personalized care\", \"Improved working conditions\" and \"Healthcare robots\". Turning to those Innovation Themes that FP7 considers important but which are not addressed significantly by foresight activities, the first is \"Health monitoring\", which includes over 10 per cent of the budget of FP7 on health, but which is not significantly addressed by foresight. 
This suggests it might be interesting to conduct a foresight exercise that identifies new ways of health monitoring and that establishes new networks around the topic. The second and third themes addressed by FP7 but not by foresight are \"Patient management\" and \"Animal testing alternatives\". Although these themes are not major priorities in FP7, their budget allocation is still significant. \"Integrated health management\" and \"Labour monitoring\" are themes addressed poorly by FP7. The third category of results includes the themes addressed by foresight activities, but poorly covered by FP7. Here, quite a number of themes surface. Striking among these is the importance of Innovation themes such as \"Control of infectious diseases\", \"Medical biotechnology\" and \"E-health\" in many foresights, but their relatively limited budget allocation in FP7. Indeed, the FP7 budgets for \"E-health\", \"Improvements in the healthcare system\" and \"Smart drugs\" are relatively poor, despite a relatively high number of foresights addressing these themes. Not addressed by FP7, and perhaps interesting themes to be considered in future FP7 calls, are: \"Nursing aids\", \"Healthcare consumer electronics\", \"Artificial organs and active implants\"[6] and \"Advanced therapies and therapeutic tools\"[7]. Another area is \"Medical treatments and technologies - devices or (bio)chemical substances - used for delivering these treatments\", in which new or advanced scientific or technological knowledge, materials and components, or design, construction, production and logistic principles are applied. The fourth category of results is the Innovation Themes from the Dynamo taxonomy that are not addressed by either foresight activities or FP7. The main conclusion here is that the number of such Innovation Themes is rather limited. Only \"Advanced domestic sanitation\", \"Handicapped tools/equipment\" and \"Passive (synthetic) medical implants\" are not addressed by either perspective. 
This can probably be explained by the fact that these Innovation Themes are niche areas with little economic value and have (partly) a low innovative image.\nConclusions: Before presenting the final conclusions, the research approach will be remarked upon. First, there were limitations owing to the number of foresights selected for inclusion in the study. Although a sample of 140 exercises seems to be large and most of the Innovation Themes (313/350) were covered, the frequency overview shows that most of the selected foresights are niche-oriented (the average number of Innovation Themes per foresight is 22). As innovations in healthcare represent only about 10 per cent of all Innovation Themes, just some 10-20 foresights will specifically focus on healthcare. A second issue is that the prioritisation was done by two different mechanisms: 1. frequency (foresight); and 2. budget allocation (FP7). Comparing them means comparing two different indicators. Thus, the results here should not be seen as definitive, although they do give a first indication of match and mismatch (and a better alternative approach is not presently at hand). However, further refining the methodology should include a more reliable allocation of budgets, as there is a systemic error in allocating budgets without correction on type of innovation (high investment, low investment). The overall conclusion of the analysis is that the healthcare innovations addressed by FP7 are covered by foresight exercises. And, vice versa, the innovations in healthcare discussed in the foresight community are also mostly addressed by FP7. However, the priorities are different. Overall, the budget of FP7 allocated to health is relatively high: almost 20 per cent of the entire FP7 Cooperation budget is allocated to address problems in healthcare and cure, a figure well above the average looking at the ten FP7 themes. It can therefore be concluded that health is an important topic. 
Analysis of the 140 selected foresight exercises showed that the coverage of innovations in \"health\" is more or less average, though this could be the result of the limited size of the sample. Looking at the individual Innovation Themes, those most mentioned in foresight and those addressed by FP7 are mostly aligned, even in their top five ranking.Looking at the mismatch, only a limited number of Innovation Themes show lack of alignment. Notable mismatches can be seen in \"E-health\" (high in foresight, low in FP7), \"Health monitoring\" (high in FP7, not in foresight), \"Artificial organs and active implants\" (high in foresight, not in FP7) and \"Advanced therapies and therapeutic tools\" (high in foresight, not in FP7). Moreover, some Innovation Themes were not addressed/mentioned by either perspective. Most of them can be considered niche areas, while others have a limited \"high-tech image\". The conclusion might be that they are not essential to research and innovation policy.The final conclusion is that the perspectives of foresight and FP7 are mostly aligned, perhaps leaving only some limited room for improved articulation between them.\n", "role": "user" }, { "content": "Purpose: - The EU Framework Programme and the foresight community are both focusing on future developments in innovation. This paper seeks to answer the questions: How much can they learn from one another to enhance their visions on the future? Are both perspectives aligned?\nDesign/methodology/approach: - This research used the Dynamo approach, which analyses the match and mismatch of innovation entities by characterising both by a codified taxonomy on innovation. This taxonomy is based on the experience of TNO in innovation. 
In total, 140 flagship foresight activities from the EFMN database are analysed, as well as the Working Programmes 2007-2008 from FP7 Cooperation.\nFindings: - The findings show that the perspectives of FP7 and the foresight community on innovations in health are highly aligned. Some interesting mismatches are identified that can be taken up by FP8 and the foresight community. Only a limited number of innovation themes are not addressed by both perspectives.\nPractical implications: - These results can help the foresight community to focus on important innovation themes in health not generally addressed and give input to the new FP7 working programmes for 2009-2013.\nOriginality/value: - The results of the study show a more detailed insight into what innovation topics foresight and FP7 are mentioning/addressing.\n", "role": "assistant" } ]
Manufacturing flexibility through outsourcing: effects of contingencies
[ "Outsourcing", "Resource-based view", "Operations strategy", "Manufacturing flexibility", "Transaction cost economics" ]
[ { "content": "Summarize the following paper into structured abstract.\n\n1. Introduction: In the dynamic and volatile environments in which most manufacturing companies operate today, flexibility is a valuable capability for competitive advantage (Schmenner and Tatikonda, 2005; Hutchison and Das, 2007). Manufacturing flexibility can be achieved through, among other means, modular product designs, advanced manufacturing technologies, efficient changeovers, and outsourcing. While manufacturing companies are increasingly adopting outsourcing with the aim of enhancing different types of manufacturing flexibilities, they find it challenging to realize such benefits (McDougall, 2006; Anderson, 2013). We will study four types of manufacturing flexibility - product, mix, volume, and labor competence flexibility - that are directly affected by outsourcing.\n2. Literature: 2.1 Manufacturing flexibility\n3. Theory: The main purpose of this research is to study the relationships between outsourcing and manufacturing flexibility. We posit that this relationship is moderated by contingency factors derived from existing theory. Hitt (2011) highlights two widely known theoretical perspectives when analyzing supply chain management-related topics: TCE (Coase, 1937; Arrow, 1962) and the RBV (Penrose, 1959). We invoke both these theoretical perspectives in deriving contingency factors for outsourcing-flexibility relationships.\n4. Methods: This research seeks to make a theoretical contribution and to help managers make decisions concerning outsourcing as a route toward increasing manufacturing flexibility. Given the multi-faceted nature of flexibility, and the differences of contexts in which companies operate, we presented, based on a synthesis of existing literature, the contingencies that determine whether and how outsourcing affects different types of manufacturing flexibilities. This resulted in ten propositions as stated in Section 3. 
Following these propositions, we engage in middle range theory development (Merton, 1968) by studying multiple cases of companies that use outsourcing to different extents and in different contexts. Middle range theory merges theory with empirical work. We do this by deriving propositions out of a literature review and then refining these propositions based on our case study data to develop hypotheses. Thus, we follow a grounded theory building approach (Eisenhardt, 1989; Yin, 1994; Stuart et al., 2002; Voss et al., 2002; Eisenhardt and Graebner, 2007) to gain a better understanding of the relationships between outsourcing and manufacturing flexibility. We analyze, in depth, a limited sample of selected companies that outsource activities with the aim of achieving manufacturing flexibility. The company level was selected as our unit of analysis because flexibility needs are considered from an overall company's perspective, even though the immediate results of flexibility gain may accrue in only a part of the company.\n5. Evidence from case studies: The following paragraphs describe our observations from the case studies, organized along the four theoretically derived contingency factors. Each section ends with grounded hypotheses that are refinements, based on empirical observations, of the propositions presented in Section 3. The analysis presented here focusses on the types of flexibility for which we found evidence of being affected by outsourcing, and moderated through the four contingencies.\n6. Discussion: In this section, we discuss the overall findings related to the propositions, which were based on the two theoretical perspectives - TCE and RBV - and the four sets of hypotheses, which incorporate the preliminary empirical evidence from our case studies. Our cross-case analysis points to the manufacturing flexibility benefits that managers can realize from outsourcing of manufacturing. 
Our analysis also sheds light on the characteristics of outsourcing relationships that can result in outsourcing having negative effects on different types of manufacturing flexibility. Mainly, our hypotheses point to the role of a double-edged sword that outsourcing can play by enhancing flexibility in some ways while reducing flexibility in others. Table II summarizes the theoretical perspectives and preliminary empirical evidence on which our hypotheses are based.\n7. Conclusions: In today's rapidly changing business environments, outsourcing activities are often widely adopted with the intention of achieving manufacturing flexibility. However, research studies conducted to date did not analyze the true effects of outsourcing on manufacturing flexibility, as they did not account for the multidimensionality of the capability. Our research takes a step toward filling that gap in the literature. In addition, we account for the fact that the flexibility achievements of an outsourcing company depend not only on the sole actions of the outsourcing company but also on some actions taken by its suppliers. Our theoretical arguments and empirical support leading up to the hypotheses highlight the need to incorporate characteristics of both outsourcing companies and suppliers into further research on outsourcing. Our hypotheses add specificity to the relationships of outsourcing with each of the types of manufacturing flexibility, further pointing to contingencies that moderate such relationships. The contingencies show that outsourcing activities should be analyzed carefully instead of being applied blindly, particularly since different related conditions need to be established in order to achieve positive effects of outsourcing on different types of flexibility.\n", "role": "user" }, { "content": "Purpose: The purpose of this paper is to examine the influence of organizational context on the relationships between outsourcing and manufacturing flexibility. 
In doing so, the authors study four types of manufacturing flexibility: product, mix, volume, and labor competence flexibility.\nDesign/methodology/approach: Based on transaction cost economics theory and resource-based view of competitive advantage, the authors focus on economies of scale and scope, asset specificity, organizational learning, and dynamic capabilities as contingencies affecting outsourcing-flexibility relationships. Combining theoretically developed propositions with insights from case studies of 11 manufacturing companies that outsourced some portion of their manufacturing, the authors derive grounded hypotheses.\nFindings: Empirical results show that in some cases the effects of outsourcing on different types of manufacturing flexibility vary based upon some contingency factors.\nResearch limitations/implications: Due to the qualitative nature and the geographical focus of the empirical examination, applicability of the findings to other contexts may be limited.\nPractical implications: The authors point out specific contingencies that managers should consider when targeting manufacturing flexibility through outsourcing.\nOriginality/value: This paper presents the interrelationships among outsourcing of manufacturing activities, four types of manufacturing flexibilities, and theoretically derived contingencies. Based on evidence from the analyzed cases, the authors find indications that some contingencies moderate outsourcing-flexibility relationships. In addition, this paper introduces a new type of manufacturing flexibility: labor competence flexibility, which is defined as the ability of a company's workforce to deal with technology driven additions to and subtractions from products over time.\n", "role": "assistant" } ]
Developing an integrated quality network for lean operations systems
[ "Lean manufacturing", "Integrated framework", "Iterative systems", "Service ecosystems", "Resilient operational systems", "Total quality management", "Just-in-time (JIT)", "Total productive maintenance (TPM)", "Value stream mapping (VSM)", "Kaizen continuous improvement (CI)" ]
[ { "content": "Summarize the following paper into structured abstract.\n\nIntroduction: Lean production is a globally accepted concept based on the Toyota Production System (TPS) and it integrates different quality tools and techniques to focus on eliminating waste, not just material waste, but labor and time waste generated by some processes. The system is truly lean and optimized only when all of these wastes have been removed from the system. Lean production strives to maximize the value for the customer by designing an adaptive system, which is characterized by leveled flow and aimed at operational excellence.\nLiterature review: An extensive literature review provided a conceptual understanding of lean and core quality methodologies/tools employed by the manufacturing and service organizations. Shah and Ward (2007) defined lean operation systems as an integrated socio-technical system with its prime goal to eliminate waste and simultaneously minimize variability caused by supplier, customer, and other internal participants. This characterization of lean operation systems magnifies the need for an integrated management of the social and technical sub-systems of manufacturing or service organizations. The characteristics and impacts brought by lean tools and techniques have been presented by Womack and Jones (1996). The effective application of various lean practices had a profound impact in a variety of industries, including oil and gas (process industry), manufacturing, aerospace, electronics and service industry.\nFindings: The RepGrid study has identified that most manufacturing and service transformation systems have three distinct phases as indicated in Figure 1.\nConclusion: The widespread model of disparate localized quality systems for each transformation phase is deemed ineffective for achieving lean objectives. This study suggests that quality systems in a complex competitive environment must consider an integrated iterative approach. 
An iterative development of lean quality tools for multiple phases produces an integrated quality system. Such systems employ blending and extending of lean quality tools to multiple phases of the transformation system to synthesize an agile and versatile quality system.\n", "role": "user" }, { "content": "Purpose: Most successful companies have adopted some type of improvement methodology to achieve optimum performance, high quality, lower costs and productivity. Some of the structured methodologies employed indiscriminately are total quality management, quality control, agile, lean and Six Sigma, which yield varied results. The purpose of this paper is to explore how to harness the power of an integrated system of quality tools and techniques to create operational excellence. An integrated framework involves matching quality tools and techniques to the multi-phases (input, transformation and output) of a lean manufacturing or service ecosystem.\nDesign/methodology/approach: Current research of lean quality systems provides a conceptual understanding of core tools employed by manufacturing and service organizations. Interviewing domain experts from a series of manufacturing and service organizations highlighted a common challenge: lean tools and methodologies were selected and employed arbitrarily for the different operational phases, which resulted in selective synergies of tools between operational phases. This limitation resulted in rework and duplication of quality efforts through the diverse phases of the transformation system. This study is based on the hypothesis that all phases of an operational system must be linked by common tools and methodologies which enable harnessing quality benefits and synergies throughout the entire operational system. 
The study methodology trailed through cooperative inquiry using a case study approach to design an integrated framework of tools that facilitates a common platform for manufacturing or service ecosystems.\nFindings: This study suggests that quality systems in a complex competitive environment must consider an integrated iterative approach. An iterative development of lean quality tools for multiple phases produces an integrated quality system. Such systems employ blending and extending of lean quality tools to multiple phases of the transformation system to synthesize an agile and versatile quality system.\nResearch limitations/implications: A limitation of this study is that the research of the integrated framework is based on the repertory grid technique only; it should be supplemented by other methods. Second, the proposed framework does not consider the complexity added by the internal and external stakeholders as they interface with the integrated system at different points with reference to phases of the system.\nPractical implications: One of the advantages of this method is its generality: instead of delivering a monolithic system at the culmination of a long transformation process, we rely on smaller quality sprints which are implemented sequentially at each stage or phase of the transformation system. The phenomenon of incremental clustering of time-series of quality sprints for different phases results in true integration from end to end for a transformation system.\nSocial implications: This study helps investigate the personal constructs that users and managers employ to interpret and select quality tools or methodologies for the different phases of a lean transformational system.\nOriginality/value: This study aims to understand the impact of blending quality and business process improvement tools and methodologies to enhance outcomes. 
The basis of this study is \"the power of multiplicity\" through which a diverse collection of improvement paths is pooled into an integrated framework of quality tools for lean and efficient operations.\n", "role": "assistant" } ]
Is the learning economy a viable concept for understanding the modern economy?
[ "Social trust", "Well-being", "Economic performance", "Catchwords", "Concept formation", "Learning economy" ]
[ { "content": "Summarize the following paper into structured abstract.\n\n1. Introduction: Catchwords and phrases can be found in nearly every field of human activity where they weave themselves into the daily fabric of social and workplace discourse. Their uses are varied: impressing others, gaining social acceptance, suggesting expertise, signalling group or class adherence or conveying a hidden or indirect meaning. Periodically, new catchwords and phrases will emerge in economics and public policy. Since the 1990s, there has been a succession of these vogue words and phrases: the service economy, the information economy, the knowledge economy, the digital economy and most recently the new economy. Catchwords and phrases get spun off in the course of a number of social, political and institutional processes. They are frequently adopted in international, inter-organisational and inter-social fora as possible compromise reference rationales to which each national party can give whatever meaning it wants. They are therefore meant to remain as ambiguous and ill-defined as possible. Some of these compromise agreements on referent policy agenda make more rational sense than others thanks to the efforts of international civil servants who are connected with the best researchers. This, however, is not primarily why they are chosen in this highly negotiated process between delegates. Catchwords and phrases also serve as convenient reference points for politicians' justification of economic and social policies that require immediate personal sacrifice in the name of a putative better future. Every so often trend makers will market new catchwords or phrases. Sometimes they manage to create a bandwagon, fad-and-fashion effect around them as happened during the 1990s with the marketing of the notion of industrial clusters (Porter, 1990). Once co-opted into general speech, catchwords and phrases quickly lose whatever definitional value they may have possessed. 
Their collective adoption constrains curiosity and brings about a tunnel vision that limits inquiry, deflects from research into alternative avenues and hinders efforts to go beyond description to measurement, prediction and efficient prescription.\n2. Is the learning economy another catchphrase?: All concepts carry out a sorting role, gathering together attributes that are alike into a coherent grouping and differentiating them from other attributes. This sorting process serves to both define a concept and establish its borders within a field of similar terms. A concept whose attributes have nothing in common will be hardly distinguishable from neighbouring concepts. The twin functions of coherence and differentiation are arguably the two most important criteria to take into account in gauging the goodness or adequacy of a concept in the social sciences (Gerring, 1999)[2]. In this section, the broad features of the learning economy are sketched out to show how they - unlike catchwords and phrases whose sense is almost always ambiguous or even contradictory[3] - constitute a coherent set of relationships, institutions, values and norms affecting demand, production and exchange, which on the one hand describe this emerging economic regime and on the other differentiate it from other like concepts. Or, to put it slightly differently, our aim is to show that the concept of the learning economy possesses a core or essential meaning that defines it in time and place.\n3. A realistic Eutopia?: As well as possessing conceptual goodness or adequacy, one would also like a concept to be true. A concept must, to some extent, accurately portray a portion of the empirical world. There should be a degree of alignment between a concept's essential attributes and the natural phenomena it attempts to capture. As Gerring (2001, p. 49) asks, \"Have we achieved an accurate rendering of things with words? 
Better, have we achieved a good fit between words and things?\" Here, a simple comparative analysis of 16 European Union countries is undertaken to assess the factual soundness of the learning economy. As outlined above, we maintain that what matters for countries in the learning economy is the capacity of agents to learn, which coupled with a high degree of social equity[4], leads to a self-reinforcing, interactive learning growth process and ensuing economic success. To test this assertion a straightforward means of measuring capacity to learn and social equity is needed so as to express these key aspects of the learning economy. Expenditure on education as a proportion of GDP is chosen to measure agents' capacity to learn since ongoing investments in education and training are essential to sustaining a learning economy. For our second indicator, the \"people at risk of poverty or social exclusion rate\" is used to measure social equity. This commonly used indicator of inclusiveness refers to the share of people in the total population who are in at least one of the following three conditions: at risk of poverty after social transfers (income poverty), severely materially deprived, or living in households with very low work intensity. People are counted only once even if they fall within more than one sub-indicator. For ease of exposition, the \"people at risk of poverty or social exclusion rate\" is subtracted from 100 to obtain a monotonically increasing function which measures the share of people in the total population who are not at risk of poverty or social exclusion. For our measures of economic success, we first consider the employment rate since it is crucial to a country's economic performance that it has as many people as possible in the employed work force (Crouch, 2013). In a globalised world, economic success is increasingly dependent on the ability of a country's firms to mobilise their work force to innovate and create new products and services. 
Our second measure of economic success, patent applications to the European Patent Office per million inhabitants, provides an approximate estimate of a country's innovative capacity. Because high real incomes and the security and amenity they provide are necessary for the emergence and operation of the learning economy[5], our analysis is focussed on the affluent member states of the old European Union (EU-15) together with the associate members of Norway and Switzerland. Luxembourg, with a population of fewer than one million, is excluded from our country sample as its small size tends to overstate its characteristics, such as having the highest per capita GDP. The scores of the 16 countries on the capacity to learn and social equity indicators are presented in Table I, along with their results on the two measures of economic success. Unless otherwise stated, all definitions and data series are sourced from the Eurostat database and are for 2011, the most recent year for which data are publicly available for the expenditure on education as a proportion of GDP indicator.\n4. New required research: What has been put forward can equally be viewed as a research agenda whose purpose is to delineate the salient features of the interactive learning economy, locate its referents in physical space and advance our understanding of the functional role of learning in fostering economic growth. As with most new areas of research, more specific indicators of the learning economy will need to be developed. Preliminary investigations indicate that current indicators are not fully adequate to the task (DeBresson, 1997). At the outset, such indicators should be the focus of academic research but some may eventually be tagged onto innovative activity surveys in future revisions of the Oslo Manual. 
As the ability of economic agents to gain knowledge is at the core of the learning economy, collaboration with cognitive researchers will be essential to study the phenomenon we are talking about - how people learn in interactive social environments. There is also a need to work with new meso-level units of analysis that are based on human learning interactions, namely, networks of people and networks of organisations. Within this context, the role of different organisational forms in fostering learning is an especially important topic for research. Careful observations of how learning organisations operate in actual fact may yield valuable insights for later inferences and comparisons.\n5. Conclusion: The goal of this paper has been to explore the conceptual adequacy of the learning economy as formulated by Bengt-Ake Lundvall and its ability to shed new light on the modern economy where interactive learning at the individual, firm, and national levels is a key determinant of economic growth and well-being. This aim has been pursued through an evaluation of the learning economy's performance in regard to specific criterial requirements that are felt in the formation and use of all social science concepts along with an exploratory empirical investigation of its performance across 16 EU countries. We find that the learning economy represents a superior conceptual and heuristic starting point to the many fleeting catchwords and phrases found in economics and public policy. Even though it is still circumscribed in its development to a small number of industrially advanced countries of the world, we hold that the learning economy represents a realisable Eutopia or good place for other societies to pursue. Because it has already emerged in some developed countries, the learning economy provides a more credible policy rationale for politicians and social movements. 
If one has to ask people, communities and societies to sacrifice, save and invest for the future, it is more convincing to do so for a concrete and credible future that does exist or has existed someplace than for a conjectural and hypothetical future.\n", "role": "user" }, { "content": "Purpose: The purpose of this paper is to explore the conceptual adequacy of the learning economy and its ability to describe the modern globalised economy. It is argued that unlike many fleeting catchwords and phrases found in economics, the learning economy represents a superior conceptual and heuristic starting point that reflects a new and emerging economic regime.\nDesign/methodology/approach: The paper examines those features of the learning economy which makes it a useful conceptualization and highlights some preconditions that are functional for its emergence. The paper then assesses the empirical validity of the learning economy and gauges its performance across 16 EU countries.\nFindings: The learning economy represents a viable and useful concept in economics and the broader social sciences, which synthesizes recent attempts to depict what is new in the world economy into an internally coherent whole while overcoming previous shortcomings. It reflects a tangible reality that has taken hold most firmly in a small but significant part of the world, the Nordic countries of Northwestern Europe.\nSocial implications: Because it has already emerged in some advanced countries, the learning economy offers a concrete exemplar for other countries to emulate. If one has to ask people and communities to sacrifice, save and invest for the future, it is more convincing to do so for a concrete and credible future that does exist than for some conjectural future.\nOriginality/value: The paper uses an epistemological perspective to analyse the concept of the learning economy as articulated by Bengt-Ake Lundvall.\n", "role": "assistant" } ]
The effect of work environment, leadership style, and organizational culture towards job satisfaction and its implication towards employee performance in Parador Hotels and Resorts, Indonesia
[ "Job satisfaction", "Leadership style", "Organizational culture", "Employee performance", "Work environment" ]
[ { "content": "Summarize the following paper into structured abstract.\n\nIntroduction: Performance is a measure of the success of an organization, whether a production organization or a service provider organization. A hotel is one of the organizations that operate in the field of services and likewise require \"performance\" as a benchmark of success. In general, a hotel operates its business on its own or in a group. Hotels which operate their business in groups are commonly referred to as a hotel group; a hotel group is operated by a hotel management company that coordinates and maintains the performance of the hotels under that company's assistance.\nReview of literature: Work environment\nMethod: This research is classified as applied research and used a survey technique. According to their explanation, this research is a causal or correlational study, which means that it is intended to obtain an answer to the question \"why\". This research is also a multivariate study in which an exploratory multivariate analysis was used. The population of this study was all employees of Parador Hotels and Resorts, as many as 642 people. The sample size was determined by considering several factors of the Slovin formula; it ranged between 100 and 150, with a minimum sample size of 100, or 5-10 respondents for each parameter, guided by the 10-time rule of thumb. Based on these considerations and to anticipate missing or incomplete data, the authors determined the sample size at 200 people. Once the survey was distributed, the returned and processed questionnaires numbered 179. 
The sampling technique of this study was proportionate stratified sampling, and the elements of the sample were determined by accidental sampling.\nResults and discussion: Results\nDiscussion: The influence of work environment toward job satisfaction\nConclusion: Based on the results of the research and discussion, as noted earlier, conclusions can be drawn about the effect of the work environment, leadership style and organizational culture on job satisfaction and their implications for the performance of employees at Parador Hotels and Resorts.\nSuggestions: Suggestions for the management\n", "role": "user" }, { "content": "Purpose: The purpose of this paper is to determine the effect of the work environment, leadership style and organizational culture on job satisfaction and its implication toward the performance of the employees.\nDesign/methodology/approach: The research population was all 642 employees of Parador Hotels and Resorts, Indonesia. The sample size was determined with the Slovin formula, and the Structural Equation Modelling (SEM) sample consideration was equal to 200 employees. As many as 179 questionnaires were returned and sent for analysis. Proportionate stratified sampling was used as the sampling technique, and sample elements were determined by the accidental sampling method. The analytical methods used in this study were descriptive statistics and SEM-Partial Least Square with IBM Statistical Package for the Social Sciences (IBM SPSS) Statistics 22.0 software and the WarpPLS 3.0 program.\nFindings: The results show that work environment, leadership style and organizational culture have a positive and significant impact on job satisfaction, but only the leadership style has a positive and significant effect on employee performance. 
Job satisfaction does not have a significant positive effect on employee performance and is not a mediating variable.\nOriginality/value: As indicated by the findings, the role of leaders in the hotel industry, in this case the general manager (GM), is of importance. Without a high-quality GM, job satisfaction and organizational culture will not be achieved.\n", "role": "assistant" } ]
Implementing Customer Relationship Management (CRM) in hotel industry from organizational culture perspective: Case of a chain hotel in the UK
[ "Marketing strategy", "Organizational culture", "Hotel industry", "Customer relationship management (CRM)", "Chain hotel", "International hospitality" ]
[ { "content": "Summarize the following paper into structured abstract.\n\nIntroduction: With high competition in the business environment, hotel businesses have realized that they need to improve the quality of their services to gain new customers (Wu and Lu, 2012; Lo et al., 2010). The key factor of success for the hotel industry is not only the quality of the products or services, but also the quality and importance of guest relationships (Olsen and Connolly, 2000), particularly the relationship with existing loyal customers, as the cost of gaining new customers is approximately five times greater than the cost of keeping current customers. Customer relationship management (CRM) in hotels has a strong focus on identifying and retaining the most profitable guests and improving the profitability of less profitable guests (Wang, 2012). Previous studies showed that hotels that set up CRM systems to find and retain their best guests and develop long-term relationships with loyal guests could acquire greater benefits (Sigala, 2005; Wu and Lu, 2012). In recent years, CRM has become a widely discussed topic within the hotel industry (Lo et al., 2010) and has gradually been applied to the hotel industry to enhance the relationship between the enterprise and its guests, and increase business profit (Wu and Lu, 2012).\nLiterature review: Customer relationship management\nMethodology: Instrument and sampling\nResults: As mentioned earlier, partly completed questionnaires were disregarded and 214 questionnaires were used in the statistical analysis of the study. Table IV shows the demographics of the respondents. As demonstrated, the majority of the respondents (53.2 per cent) were female. More than 40 per cent of the respondents were between the ages of 30 and 39. 
Thirty per cent of the respondents had undergraduate education, and 30 per cent of the respondents had postgraduate education.\nDiscussion and conclusion: Theoretical contribution\nLimitations and future research: The main theory driving the research was Denison organizational culture (Denison and Mishra, 1995). Denison and Mishra (1995) defined organizational culture based on 4 traits and 12 indexes. The current research used the model based on its four traits, while future research can consider the 12 indexes and investigate their potential impact on CRM implementations.\n", "role": "user" }, { "content": "Purpose: The purpose of this research is to empirically investigate the impact of organizational culture on implementing customer relationship management (CRM) in the hotel industry.\nDesign/methodology/approach: The research is conducted with a quantitative approach, and a questionnaire adapted from the Denison Organizational Culture Survey and the Mendoza CRM model is the research instrument. The questionnaire was administered among 346 managers of a chain hotel in the UK. Gathered data were subjected to correlation and multiple regression analyses.\nFindings: The correlation analysis demonstrates that the organizational culture factors of adaptability, consistency, staff involvement and mission have a positive significant impact on CRM implementations. The multiple regression analysis further showed that though CRM implementation is highly correlated with these four factors, its successful implementation is not dependent on all of them.\nResearch limitations/implications: The research is conducted within the frame of a case study in which a UK chain hotel was selected; therefore, the findings cannot be generalized to a larger population. This research is conducted in the context of the hotel industry, and the results might be different for other industries. 
Due to the limitation in access to all employees, only managers were selected as the sample of the study and future studies with all staff may show different results.\nPractical implications: Organizational culture readiness is one of the most important requirements in CRM implementation initiatives. The results of this study will benefit hotel managers in measuring their organizational culture and improve it toward better CRM outcomes.\nOriginality/value: Previous studies on organizational culture and CRM with qualitative approaches have tried to highlight the role of organizational culture on CRM implementation or some have attempted to identify the organizational culture factors with potential impact on CRM implementations. However, very few of these studies have empirically investigated the impacts of organizational culture on CRM implementation, and this is the first study that empirically investigates this impact in the context of the hotel industry.\n", "role": "assistant" } ]
Promoting the health and wellbeing of young Black men using community-based drama
[ "Youth", "Drama", "Qualitative research", "Ethnography", "Community work (social welfare)", "Personal health" ]
[ { "content": "Summarize the following paper into structured abstract.\n\nIntroduction: The Brother to Brother project\nMethodology: Aims of the evaluation\nFindings: This section summarises the qualitative outcomes of B2B for the young participants. The analysis presented here draws on the qualitative data collected throughout the project. B2B had the following outcomes for those boys and young men who took part. Self-esteem, confidence, and self-expression\nConclusion: B2B made it possible for a group of young and older Black men to come together into a \"safe space\" to discuss issues of concern to them (e.g. masculinity, the experience of being a young Black man, fatherhood, attitudes and beliefs about sex, etc.). It was hoped that, by the end of the project, the young people who participated would have achieved a greater awareness of sexual health issues, a more in-depth understanding of their emotional experiences, and of their relationships with their fathers, other young men and women. The quality and effectiveness of community health initiatives and services depend on the quality of the relationships established among the participants and practitioners involved. That is, in order to improve the health and wellbeing of young people, effective ways need to be found to engage those young people who are least likely to engage with mainstream statutory health and social care services. B2B highlighted the strength of arts-based approaches as a means of promoting the wellbeing and social development of Black boys and young men. 
It also demonstrated the value of youth work, informal education, and group work as effective means of engaging young men around health issues and promoting their health and wellbeing.\n", "role": "user" }, { "content": "Purpose: - This paper aims to explore the role of drama and theatre in promoting the emotional and social wellbeing of a group of young Black men living in south London.\nDesign/methodology/approach: - A qualitative methodology was used in a process and outcome evaluation of a drama-based initiative that aimed to promote young Black men's sexual and emotional health.\nFindings: - The research found that this community-based initiative was able to promote young people's self-esteem and a positive sense of agency. This was achieved by creating opportunities for self-expression, reflection and self-understanding, and through the development of relationships between participants characterised by trust and reciprocity.\nOriginality/value: - The evaluation points to the strengths of youth and community work and arts-based approaches in engaging young people around health issues and in promoting emotional wellbeing and a positive sense of identity among young people. The evaluation also highlights the usefulness of process-oriented qualitative evaluation as an appropriate way of evaluating and contributing to the ongoing development of initiatives that aim to use the arts in healthcare settings.\n", "role": "assistant" } ]
The impact of additive manufacturing on supply chains
[ "Barriers", "Additive manufacturing", "3D printing", "AM processes/technologies", "Supply chain implications", "Time horizon" ]
[ { "content": "Summarize the following paper into structured abstract.\n\nIntroduction: Imagine a world in which, whenever you want a product, all you need to do is to purchase a digital CAD model of the product online. This digital model can then be fed into your printer at home, and the bespoke product will be printed for you. In this world, you would neither have to leave the house to go to the nearest hardware store, nor would anyone else have to jump in a truck to bring the product to you. The printing technology in this fictional example is called additive manufacturing (AM; commonly known as three-dimensional or 3D printing). AM is, probably due to increased media attention, amongst today's best known digital manufacturing technologies.\nResearch framework: Figure 1 illustrates the research framework for the timeline of the impact of AM on supply chains. The framework assumes that due to the emergence of a variety of AM processes and given the barriers to AM adoption in various application areas, we will see different supply chain impacts at different points in time.\nResearch methodology: This research explores which AM processes emerge, what the barriers to their widespread adoption are, how long they will exist and how supply chains will be affected by their adoption at a certain point in time. To answer these interlinked, forecasting-oriented research questions, it was desirable to collect empirical data in multiple rounds, preferably from a small, heterogeneous panel of experts. The composition of the panel was designed to be heterogeneous to foster creativity and to integrate viewpoints from a variety of industries and from some knowledgeable academics (Okoli and Pawlowski, 2004). Given this nature and to increase confidence in this study's findings, we followed the methodological recommendations in Delbecq and Van de Ven (1971) and conducted a multi-stage survey study (see Wieland et al., 2016 for its application in SCM). 
The multi-stage survey study is akin to the Delphi technique (Rowe and Wright, 2001), providing full anonymity to a panel of heterogeneous experts while allowing for a moderated exchange of ideas. It is, however, less formalized in terms of the provision of feedback to the participants and the number of survey rounds needed. The multi-stage survey study balances the panel's input through a moderated and anonymous process that ensures integrity, which is in stark contrast to traditional onsite focus group discussions where it is usually difficult to encourage more reticent members to bring their ideas forward (Lloyd, 2011).\nResults and discussion: 3D printing: emerging AM technologies\nConclusion: This study contributes to the sparse literature at the intersection of AM and SCM and is a direct response to Holmstrom et al.'s (2016) and Rogers et al.'s (2016) call for further research on AM. As Rogers et al. (2016) have pointed out, of the more than 2,400 articles discussing 3D printing, only 12 addressed its potential impact on SCM. We believe this study is amongst the first toward addressing this gap.\n", "role": "user" }, { "content": "Purpose: The purpose of this paper is to offer empirical insights on emerging additive manufacturing (AM) processes, barriers to their adoption and a timeline of expected impacts on the supply chain in the manufacturing industry.\nDesign/methodology/approach: A multi-stage survey study was conducted with a panel of 16 experts from industry and academia.\nFindings: Only five out of today's seven AM processes are of future importance, as are two emerging key processes. In total, 15 barriers to their adoption are identified, all of which are expected to be gone within ten years. 
Eight propositions are derived postulating as to whether and when supply chain impacts can be expected in terms of changes to supply chain structure, customer centricity, logistics and supply chain capability.\nResearch limitations/implications: \"Soft\" barriers are new to the literature, which has traditionally focused on \"technical\" barriers. Often-discussed barriers such as production speed and costs do not reflect the true concerns of the research panel. Furthermore, some of the supply chain implications discussed in both the academic literature and the media are found to be unlikely to materialize.\nPractical implications: The study summarizes AM processes, technologies, barriers and supply chain implications solicited from experts in the field.\nOriginality/value: This is one of the first studies to make empirical contributions to a vastly conceptual discussion. It is also the first study to give insights on a timeline for barriers and supply chain implications.\n", "role": "assistant" } ]
A multigenerational mobility study: empirical evidence from Brazil
[ "Brazil", "Inequality", "Intergenerational transmission", "Becker and Tomes model", "Mobility", "Three generations" ]
[ { "content": "Summarize the following paper into structured abstract.\n\n1. Introduction: Most intergenerational mobility studies use data on two generations to estimate children's earnings as a function of parental earnings. This paper uses data on two and three generations to estimate other relations between a person's earnings and family background implied by the seminal Becker and Tomes mobility model.\n2. Theoretical model: Becker and Tomes model: a simple version\n3. Data: The PNAD is a nationally representative household survey conducted almost every year in Brazil. Survey participants are asked about the age, education, occupation and earnings of all members in the household. Also, they are asked to specify the relationship of each member to the reference person in the household, allowing identification of the father, sons and brothers for most households. Tables II and III present some descriptive statistics for males in the 1996 PNAD.\n4. Estimation results: Estimation of Equations (i) and (ii) requires data on at least two generations. Estimation of Equations (iii) and (iv) requires data on at least three generations.\n5. Conclusion: I estimate several relationships yielded by simple and modified versions of the Becker and Tomes (1979) intergenerational mobility model. The relationships are listed as follows:\n", "role": "user" }, { "content": "Purpose: Most intergenerational mobility studies use data on two generations to estimate the elasticity between son's and father's earnings. The purpose of this paper is to use a data set spanning three generations to estimate additional relationships between a person's earnings and family background yielded by intergenerational mobility models such as Becker-Tomes (1979) model and modified versions of it.\nDesign/methodology/approach: The paper uses data from the 1996 PNAD - a nationally representative household survey in Brazil. 
The author builds a data set consisting of 5,125 grandfather-father-son triplets by taking advantage of two characteristics of Brazil. First, commonly in Brazil, individuals live with their parents until they marry. Second, individuals tended to quit school and begin working at an early age. As a result, there are many households with adult sons who are not at the very beginning of their working careers. Since the sample is limited to households with adult sons, the author applies Heckman (1979) estimation procedure to address selection bias.\nFindings: Estimation results contradict some predictions of simple versions of the Becker and Tomes model. The paper proposes a modified version of the Becker and Tomes model that allows for a skipping generation effect, and finds that family background explains 34.9 percent of the variation in earnings among males aged 16-27 in Brazil. If there were no differences in endowments (talent, IQ, health, physical appearance, attitudes toward work, family connections, etc.), the variation in earnings would fall by no less than 26 percent. If it were possible to eliminate differences in investment in human capital, the variation in earnings would fall by at most 21.1 percent.\nResearch limitations/implications: The paper has two main data limitations. First, the 1996 earnings of the fathers and sons are used as proxies for lifetime earnings although the transitory component of one-year earnings may be quite large, particularly at young ages. 
Second, in spite of the efforts to deal with the sample selection bias, the paper shows that the intergenerational elasticity in earnings for the sons aged 22-27 is about 14.6 percent lower for the subsample of households with adult sons than for the full sample.\nPractical implications: The paper finds evidence supporting the existence of a direct effect of the grandparents on the grandchildren beyond their influence on the parents, and reinforces consideration of this factor in intergenerational mobility studies.\nSocial implications: The findings in this paper may suggest room for improvement in the economic outcomes of children in less privileged families through investment in formal education as well as policies that consider other aspects of a person's life. For instance, Bolsa Familia - a Brazilian government program that provides cash allowances to poor families conditional on children's school attendance - may improve the economic outcomes of poor children by enforcing formal education and by lessening the children's hardships at home.\nOriginality/value: The paper proposes a modified version of the Becker and Tomes model which allows for a skipping generation effect. Under the assumptions of the modified model, and together with a three-generation data set from Brazil, the paper estimates a lower bound for the variation in earnings explained by differences in endowments across families, and an upper bound for the variation in earnings explained by differences in human capital.\n", "role": "assistant" } ]
Rethinking strategic marketing: achieving breakthrough results
[ "Strategic management", "Marketing strategy", "Strategic planning" ]
[ { "content": "Summarize the following paper into structured abstract.\n\n__NO_TITLE__: Competence in strategy is the sine qua non of the marketing manager. Yet even among well-schooled and experienced marketers, strategy failure rates remain uncomfortably high. There is little room for error in \"getting it right\" strategically in today's fast-moving marketplace with increasingly sophisticated competition and the performance of marketing under scrutiny as never before. Get marketing strategy wrong and second chances will be hard to find. Although marketing strategy is critical to business success, the current model of strategy making taught to and expected of the average marketer is largely incomplete.\nThinking strategies: One of the greatest challenges facing marketing managers and their organizations today is the development of the critical thinking and problem-solving skills needed to adapt quickly and effectively to change. Pervasive doubts about whether people can actually learn to think \"better,\" however, cloud more constructive discussions about how managers can learn to think more critically and creatively. A widely held (and seldom questioned) model of thinking is that it is basically an assortment of cognitive processes that is either \"hard wired\" into an individual from the start or acquired over time. Common sense and common experience, however, suggest that there are many thinking processes that can be improved using techniques commonly applied to other forms of skill building. A marketing manager's ability to think critically and solve problems effectively depends on the thinking strategies (networks of skills) used and when and how they are implemented. The list below shows some examples:* Embrace orthodoxy. Samsung parlaying its vast knowledge of consumer electronics into a cohesive strategy for designing digital devices for the home - many of which lead their product-market category.* Challenge orthodoxy. 
P&G emerging as a growth company after adding lateral marketing and seeking innovation from outside sources (new products, new markets and new technologies).* Analogize this. Retailer TK Maxx targeting \"something new every day\" strategy at customers for whom \"shopping is like hunting.\"* Break assumptions. IBM rejecting traditional market research techniques in favor of listening to customer stories to really understand their markets.* Cross-fertilize. Hewlett-Packard extending beyond their traditional industry boundaries to open up new markets and to deliver new value to customers with HP PhotoSmart.* Attract opposites. Lou Gerstner telling IBM'ers, \"We're going to build this company from the customer back, not from the company out.\"Embrace orthodoxy\nStrategic decision making: What determines what happens in and to a company? If a competitor's improved product quickly steals away several points of market share, for example, it is reasonable to ask why the company's marketers did not anticipate the threat and respond strategically to retain those customers. Inevitably, even if factors outside the company contribute, lying beneath specific explanations for what happened is a fundamental constant: the decisions from which the consequences resulted. In the specific instance described, the marketers in one company apparently made better decisions than those in the other. Why? Is it simply luck or do some marketers make decisions with the aid of better resources, better tools, and better procedures?When marketing strategy goes especially well or badly, the quality of decision making is only rarely cited as the \"official\" explanation. While marketing strategy results - successful or otherwise - get the headlines, the processes behind the decisions leading up to those results generally receive remarkably little scrutiny, almost as if they occurred by chance rather than by design. 
Consequently, efforts and resources devoted to improving marketing decision making are largely nonexistent. It is not surprising then that few marketing managers realize how important the specific decisions they make - or fail to make - are in determining how and how well their company does. In fact, better marketing decisions constitute a core asset of the company. Proven principles and techniques that marketers can use to improve their decision making are described here (see the list below for examples):* Remind me - why are we doing this? KFC stepping back from its decision to position fried chicken as part of a healthy diet when the market's collective jaw dropped at the idea.* What's this going to cost - really? Cisco investing heavily in acquisitions that failed to add value to the company.* What are my options? Hyundai seeking sustainable competitive advantage in the US auto market using a value strategy coupled with innovation and customer intimacy.* What are the chances? Numerous companies abandoning expensive investment in CRM when users failed to adopt it.Remind me - why are we doing this?\nStrategic thinking competencies: In order to approach strategic thinking with the same discipline applied to other forms of marketing competence, a competence model is needed. What strategic thinking skills do marketers need to possess and how do they contribute to thinking strategically about marketing? The list below shows some key competencies of strategic thinking that are most useful for marketers:* Discerning alternate realities. By reframing their thinking about customers to focus on lifestyles rather than age, experience rather than hardware and interactive entertainment rather than technology, Sony is now the market leader in digital gaming.* Asking the right questions at the right time. 
Nokia misjudging consumer demand and failing to grasp the immediacy and impact of the disruptive forces that were undermining entry barriers to their market.* Determining the critical issues. Companies failing at customer satisfaction initiatives because they misunderstand the interactions between the customer, the employee and how the company treats its employees.* Escaping the obvious. Microsoft considering what might have been the result had they not redirected resources to bolster the security of their products and services.Discerning alternate realities\nVisualizing strategy: Thinking strategically will be a futile effort if it fails to result in marketing strategy that is easily understood and communicated. Visualizing strategy (the fourth dimension) is all about identifying and mapping the key elements of a marketing strategy. Managers who use the described techniques find that they help to clarify and translate their vision and strategies into action. The list below provides examples:* Seeing the future. Shell using scenario planning since the early 1960s.* Envisioning the essential. Using a graphic display of purchasing manager perceptions of value in the mid-size router market to explore key competitive factors in the industry.* Mapping the territory. GM creating a strategy map to provide a visual representation of its strategy.Seeing the future\nSummary: Strategic thinking offers marketers and their companies the opportunity to move beyond the automatic application of traditional strategic frameworks to identify and to achieve breakthrough marketing strategies. The real power of thinking strategically about marketing lies in its potential as a source of competitive advantage equally applicable to creating superior value for customers, erecting barriers to competitors, or enabling more rapid adaptability to change.The strategies proposed here are all designed to reduce the risks of marketing strategy failure. 
The raw materials of strategic thinking (creative and critical thinking, decision making and problem solving) can be transformed into a practical system for enhancing the strategic promise and performance of marketing and its practitioners. In addition, using visualization to communicate marketing strategy in a concise, articulate and compelling manner significantly increases its likelihood of success.The marketing profession is long overdue in applying the same attention and rigor to strategic thinking that it applies to strategic planning. A robust planning framework is vital to marketing success; but it is no longer sufficient by itself, given the rapid rate of change. The need for managers who can think strategically about marketing will only increase. Understanding and applying the principles and techniques presented here can help marketers and the marketing community rise to this challenge - and opportunity.\n", "role": "user" }, { "content": "Purpose: - To provide a practical overview of strategic thinking concepts and practices for marketing and other managers that can help them improve their strategy making.\nDesign/methodology/approach: - The raw materials of strategic thinking are illustrated by case studies and examples from a range of industries to aid managers in their successful application. The principles and techniques are presented in four categories: thinking strategies, strategic decision making, strategic competencies and visualizing strategy.\nFindings: - Offers marketers an approach for moving beyond the automatic application of traditional strategic frameworks to identify and to achieve breakthrough strategies. 
Recognizes the real power of strategic thinking as a source of competitive advantage.\nPractical implications: - The principles and practices proposed represent a practical system for enhancing strategic promise and performance, as well as for reducing the risks of strategic failure.\nOriginality/value: - The marketing discipline is long overdue in applying the same attention and rigor to strategic thinking that it applies to strategic planning and this paper offers managers practical help in identifying and developing strategic thinking competencies.\n", "role": "assistant" } ]
Characterizing consumer concerns about identification technology
[ "Marketing", "Ethics", "Privacy", "Radiofrequencies", "Identification" ]
[ { "content": "Summarize the following paper into structured abstract.\n\nIntroduction: The use of technology to identify consumers has been discussed extensively in both the popular press (Jones, 2004; Krim, 2004; Vijayan, 2004) and in the general academic literature (Alterman, 2003; Peyravian, et al., 2000; Rejman-Greene, 2001). However, scant attention has been afforded to these technologies in the consumer literature. This is surprising, since biometric devices such as fingerprint and iris scanners, as well as smart card identification systems, continue to grow in popularity (McHugh, 2004). Such technologies are finding use not only in settings requiring high security (e.g. airports and nuclear facilities), but are increasingly used for reasons unrelated to security, such as providing convenience to the consumer and information to the marketing entity (Brass, 2003; Capizzi et al., 2004; Stock, 2004).\nOverview of identification technology: Biometrics and radio frequency identification devices (RFIDs), in particular, offer a number of benefits over traditional identification methods. For instance, there is no password for the consumer to remember and no identification card to lose. Further, transaction times are generally faster, identification accuracy rates are higher, and it is harder for consumers to disavow legitimate purchases (Capizzi et al., 2004; Rossi, 2004).Although new identification methods offer a variety of benefits, few new technologies have generated more controversy; the issue of collecting unsolicited data using these technologies will clearly be debated well into the future (Milne, 2000). 
For instance, groups espousing strong religious beliefs, such as The Last Day Ministries, believe that the radio frequency identification technology firm Verichip Corporation offers a product that will serve as a catalyst for an apocalyptic showdown between the forces of good and evil (Scheeres, 2002).Further, biometrics and RFIDs have been criticized for setting a rather dark precedent, conditioning students at a young age to embrace the idea of big brother-style biometric tracking. According to Chris Hoofnagle, Associate Director of the Electronic Privacy Information Center, \"If ever there was a generation that would not oppose a government system for universal ID, it's this one (Graziano, 2003).\" Others have expressed concerns that increases in screening and surveillance technologies will make the whole world look like one giant airport (Emery, 2004). If technology firms do not allay these fears, escalating public resentment may effectively ban valuable implementations of the technology (Cline, 2004).Finally, it has been noted in recent literature that issues surrounding implementation of technologies such as RFID are multidimensional in nature. For example, Jones et al.(2004) note that channel intermediaries face not only strategic and operational challenges to the implementation of RFIDs, but concerns pertaining to civil liberties as well. Many of these problems stem from consumers' fear of being tracked once they have left the retail environment. Many consumers also worry that retailers will use the technology to obtain private information about them without their knowledge.Clearly, consumers continue to express numerous concerns regarding identification technology. However, perhaps because of the novelty of the topic, no academic research could be found that has been devoted to uncovering dimensions underlying consumer concerns about these technologies. 
The focus of this research, therefore, is to identify the dimensions of consumer apprehension regarding identification technologies.\nMethod and procedure: About 119 students at a public southeastern university in the USA were asked to provide two concerns they would have about identifying themselves, providing purchase information, and/or paying for purchases using each of six different identification technologies. The six technologies presented to respondents were, in general order of invasiveness (from least to most): key chain tags, voice recognition, fingerprint scanning, facial scanning, retina scanning, and computer chip wrist implants. The use of multiple technologies, along with an open-ended response format combined to elicit a broad pool of responses. The authors selected the technologies based on a review of the popular press articles cited previously. Descriptions of the technologies were presented to respondents before they completed the survey instrument and are shown in Table I.Content analysis was employed in order to place consumer concerns regarding biometric technology into meaningful categories. Respondent quotes constituted the basic unit of analysis; 1,371 subject concerns were examined. A deductive content analysis uses predetermined categories to organize data; however, no research existed from which to build categories. Therefore, an inductive approach was used (Scanlan et al., 1989). This approach allows clusters to emerge from the data itself, therefore, categories were not pre-specified (Lincoln and Guba, 1985; McCracken, 1988; Miles and Huberman, 1984). Instead, after jointly reviewing a sample of responses, the authors clustered responses into similar groupings based on underlying uniformities (Glaser and Strauss, 1967). 
These clusters were used to create a provisional categorization scheme that was used in the initial stages of analysis (Spiggle, 1994).Six categories of concern were established: convenience and health, which relate to the physical aspects of identification technology; and humanity, identity theft, morality, and privacy, which focus on how that technology may be used by retailers and others. Written descriptions representing each category were created based on the authors' interpretation of that category. These descriptions were augmented by literature from the popular press and scholarly works. Each author then independently coded the responses into one of the six categories using the written descriptions as a guide. Responses that did not fit into any of the provisional categories were temporarily placed into an \"other\" category. Both authors then met to discuss the \"other\" responses, and jointly determined that these responses merited the creation of two additional categories, cost and personal security, both of which also relate to the physical technology itself. Each author then independently reviewed the \"other\" responses, placing them into one of the two newly created categories. A detailed description of each of the eight categories is included in the \"Results\" section below. There was 94 percent agreement between the authors as to the placement of responses into the final eight categories, which exceeds acceptable thresholds of interrater reliability (Perrault and Leigh, 1989). Disagreements in coding were resolved by discussion.\nResults: Analysis of the open-ended responses indicated that consumer concerns were broadly dispersed across the six identification technologies. As noted above, eight specific categories of concern emerged which fall under three general themes:* concerns regarding the technology itself (e.g. 
whether the device will malfunction, cost too much, or lead to physical danger);* concerns regarding the security of collected data; and* existential issues regarding interaction with the technology.Each of the eight categories, grouped by these three general themes, is described in detail in the following sections.\nConcerns regarding the technology itself: Convenience concerns\nConcerns regarding the security of collected data: Identity theft concerns\nConcerns about existential issues: Humanity concerns\nDiscussion and directions for future research: By employing an open-ended survey approach, eight significant areas of consumer concern emerged. In order of prevalence among consumers, they are convenience, identity theft, health, privacy, humanity, personal security, morality and cost. The elicitation method and inductive analysis employed in this research were essential to identifying the diversity of concerns that respondents possessed with regard to identification technology. The present study uncovered specific areas of concern that should be addressed and alleviated prior to a technology's introduction. Well-conceived and thoughtful implementation could very well translate into a competitive advantage for some retailers. However, businesses that ignore consumer concerns may find themselves in the midst of a public relations nightmare.In addition, while the respondents had many legitimate concerns, others were implausible or based on misinformation. For example, comments about scanning \"lasers\" burning eyes, voice recognition systems not recognizing lisps, and computer chips being tracked by satellites reflect consumer misunderstanding of these technologies.
It is imperative that retailers and others using these identification technologies provide their customers with accurate information in order to correct their false impressions as well as address legitimate concerns.The authors believe that the ultimate goal of research in this area should be to develop a model of consumer interaction with identification technology. Therefore, an empirical follow-up study, using a representative sample, should be conducted in order to expand the findings of this research. This would offer a number of distinct benefits. First, it would provide researchers with a tool to help forecast acceptance levels of a new technology in its development phase, before the technology is mass-produced. This information would be important to businesses that manufacture identification technology products and, more importantly, to retail firms that purchase and implement such equipment.Identification technologies, in all likelihood, will remain a controversial topic for years to come. Examined more thoroughly, this area of inquiry has the potential to enhance our understanding of consumer fears about forthcoming identification technologies. It is the authors' hope that the findings in this study will serve to create a dialogue among business, government and consumer advocates and provide an impetus for future research.\n", "role": "user" }, { "content": "Purpose: - Retailer's use of advanced technology to identify consumers has broad and, for many individuals, disturbing social implications. This study seeks to uncover consumer concerns regarding various identification technologies that may be encountered in present and future retail environments.\nDesign/methodology/approach: - Subjects were asked to describe, in writing, concerns that they might have regarding six identification technologies. 
Content analysis was employed in order to place consumer concerns regarding biometric and radio frequency identification device technology into meaningful categories.\nFindings: - Eight specific categories of concern emerged which fall under three general themes: concerns regarding the technology itself, concerns regarding the security of collected data, and existential issues regarding interaction with the technology.\nResearch limitations/implications: - This research provides a framework for academic researchers to further develop a model of consumer interaction with identification technology. Therefore, an empirical follow-up study, using a representative sample, should be conducted in order to expand the findings of this research.\nPractical implications: - This research study makes use of qualitative elicitation techniques to uncover specific areas of concern that may be addressed and alleviated prior to a technology's introduction.\nOriginality/value: - This paper fills a gap in the existing literature with regard to our knowledge of consumer concerns and fears about identification technology.\n", "role": "assistant" } ]
Transformation of a research centre toward an innovation partner in the construction sector
[ "Research organizations", "Innovation", "Construction industry", "Action research", "Partnership", "Finland" ]
[ { "content": "Summarize the following paper into structured abstract.\n\n1 Introduction: In Finland, the construction sector became academic when the technical university was founded in the beginning of the twentieth century. The governmental technical research centre Technical Research Centre of Finland (VTT) under the Ministry of Trade and Industry extended research in the year 1942. At VTT, activities have been closely related to the actual needs of various phases of the construction sector and society: during the rebuilding of the country in the 1950s and the late urbanisation of the 1970s, the demands concerned mainly quality of construction and design codes. Typically, the employees worked in several rather independent units, until major organisational changes in the 1990s. The current activities cover main fields of research and development in the sector, from markets and processes to materials, structures and technical systems.At the end of the 1990s, VTT started to allocate resources to new types of projects that aimed at integration of diversified activities in the construction sector. There were both research-based findings and requests from clients that pointed to multi-technology and multi-disciplinary approaches. A growing number of studies showed that performance of buildings needs to be predicted and managed in a holistic way; further, ageing of society emphasized user-orientation. Development of computational methods facilitated new interests.The operational environment of VTT also changed due to increased competition for funding opportunities. Traditionally, a great deal of projects was purely research-driven, both in internal and joint-funded projects. Re-thinking of the position of research took place in many OECD countries; initiatives to assign the \"third task\" to universities as catalysts of innovations were common especially in the USA.
In the European Union, a transition from a technology policy to innovation policy took place at the end of the 1990s, and the Lisbon strategy was launched in 2000. Nowadays, it is commonly expected that research organisations co-operate closely with companies. At the end of the 1990s, several factors in internal and external circumstances challenged to search for integrated research methods and collaborative customer-oriented working methods. In order to realize this transition, and to serve the changing construction sector, an internal Action Research process was set up at VTT.\n2 Transformation process: 2.1 Series of internal projects\n3 Results: from co-operation to collaboration: 3.1 Identification of needs for R&D services\n4 Discussion: The succeeding projects of an internal development process of VTT were conducted according to principles and methods of the Action Research. During the process, collaborative learning changed the mindset of the participants to understand an expert service as a process that may incorporate some commercial packages but essentially it means new working procedures. This approach was chosen after trials to commercialize fixed service packages; this way was found both risky and costly. As a starting point, the need to learn about the product development methods was understood. The rapid changes in technologies had generated new thinking in many technology fields but generic methods were not usually applied in the construction sector. The learning material was brought to the project from other sectors; the product development methods and innovation management are not taught in civil and environmental engineering departments of universities. The content of a possible service started to take shape when the differences of science-based research work and research-based development were understood better. 
The main point is the objective of the work: science-based research aims at new methods and new knowledge, the product development strives for successful innovations. The picture of research-based product development was created as a combination of experiences of the own organisation, learning process and surveys on companies. The process model and tools were not well validated or tested in practice. There were many reasons and explanations; one of the most influential reasons was that the researchers involved in the internal process were distributed to new positions in the fundamental organisational change of the entire research centre in the year 2005. The strategy of VTT was oriented to an innovation organisation. New internal activities were established to expand the research-based services, and new tools and approaches were taken into use. The next stage was also necessary as the research market was becoming global, and the companies were competing on global markets. The changes in business models have networked the various companies thoroughly. This phenomenon is graphically shown in Figure 8.\n5 Conclusions: A research-based service for product development in the construction sector was developed in an internal learning and action process. The start of the process was one research group and a true development case. Through a spiral learning process and various trials, and expanding the number of participants, the service became defined as a collaborative process with clear decision-making steps. The service consists of description of the process and its phases and subdivision to various tasks. 
The project management and quality assurance are an essential part of the process. An expert service means internally that the organisation recognizes and legalises a matrix organisation that enables collection of various expertises and capabilities from the whole organisation and in special cases from its allied companies. As a multi-functional research organisation, VTT has competences and research infra to develop its activities as a service provider for networked and outsourced product development of companies. However, there are challenges for organisational learning and new working methods. Innovations - understood as invention plus exploration - ask for systematic work that combines different competencies in creative ways.\n", "role": "user" }, { "content": "Purpose: - The purpose of this paper is to study innovation processes of the construction sector from the viewpoints of research-based consultation at VTT Technical Research Centre of Finland.\nDesign/methodology/approach: - The starting point of an internal analysis and benchmarking concerning the role of a research centre comes from a Finnish product manufacturer that challenged its capacity to a short introduction lead-time. Understanding this as an opportunity, a core team is set up representing different expertises, and a co-operation project is proposed to the manufacturer. In addition, an in-house follow-up project is created. After successfully completing the product development project, and based upon findings of the follow-up project, a feasibility study is conducted concerning consultation packages for development of construction products and concepts. 
However, during this study phase, the objective of packages is replaced by an ideal of a collaborative process that integrates competencies, tools, and management of a research and development organisation, and links them to those of the customer.\nFindings: - The process model of a multi-disciplinary commercialized research-based consultation is a new way to organise work of a research centre, and its core is collaboration with the customer. During the follow-up process, feasibility study and development of a research-based process, approaches of the Action Research were applied. A market study and two external evaluations were made that gave partly consistent partly inconsistent results with the learning outcome concerning appeal of the new type of partnership. New sector-wide strategies at the national and European level show however that key findings were justified. Next challenges in the development of the multi-disciplinary innovation services concern internationalisation, external networking and globalisation.\nOriginality/value: - The paper offers insight into the transformation of a research centre into an innovation partner with the construction industry.\n", "role": "assistant" } ]
A fuzzy embedded leagility assessment module in supply chain
[ "Benchmarking", "Decision support systems" ]
[ { "content": "Summarize the following paper into structured abstract.\n\n1. Introduction: lean, agile and leagile manufacturing concept: In this era of globalization, modern manufacturing enterprises are continuously facing tough market competition. The remarkable industrial growth in the past few decades has completely revolutionized their traditional manufacturing strategies, giving emergence to the modern concepts of lean, agile, and nowadays, leagile manufacturing. These new strategies enable the enterprises to survive in the turbulent environment of violent competition laid down by their competitors. The requirement of faster delivery within the due date and the ability to be flexible to satisfy fluctuating market demand have been the prime motivations that have provoked manufacturing enterprises to look for the best available alternatives and implement them in their daily manufacturing practices. This led to the development of a new concept of leagility, which is an integration of lean and agile principles. Agile manufacturing is adopted where demand is volatile and lean manufacturing is adopted where there is a stable demand. However, in some situations it is advisable to utilize a different paradigm on either side of the material flow decoupling point to enable a total supply chain strategy. This approach is termed as the leagile paradigm (Mason-Jones, 2000a, b).\n2. State of art and problem definition: Naylor et al. (1999) compared the lean and agile paradigms highlighting the similarities and differences, as agile manufacturing is best suited to satisfy a fluctuating demand and lean manufacturing requires a level schedule. They combined both paradigms within a total supply chain strategy particularly considering market knowledge and positioning of the decoupling point. Mason-Jones et al. 
(2000a) integrated lean production and agile supply in the total supply chain and supplemented by information enrichment which required evaluation of the total performance metric and development of a route map. Adopting such an approach to supply chain re-engineering ensured that customer service levels were improved at the same time lead times and costs were greatly reduced. Mason-Jones et al. (2000b) classified supply chain design and operations according to the lean, agile and leagile paradigms that enabled to match the supply chain type according to marketplace necessity. Herer et al. (2002) introduced transshipments, which represented a common practice in multi-location inventory systems involving monitored movement of stock between locations at the same level of the supply chain and established a model, how transshipments could be used to enhance both agility and leanness. Stratton and Warburton (2003) explored the role of inventory and capacity in accommodating the lean as well as agile supply chain variation and identified how Theory of Inventive Problem Solving (TRIZ) separation principles and Theory of Constraints (TOC) tools might be combined in the integrated development of responsive and efficient supply chains. Prince and Kay (2003) described the circumstances on which, manufacturing organizations required an integrated agile and lean characteristic in their supply chain. They also described the development of the virtual group (VG) concept, which was the application of virtual cells to functional layouts. VGs enabled the appropriate application of lean and agile concepts to different stages of production within a factory. The identification of VGs was achieved through enhanced production flow analysis. Bruce et al. 
(2004) discussed the characteristics of the textiles and apparel industry and identified the perspectives of leanness, agility and leagility within the existing supply chain literature, which were offered as solutions for achieving quick response and reduced lead times.\n3. Fuzzy preliminaries: Fuzzy logic is basically a multi-value logic which permits intermediate values to be defined between conventional ones like true/false, low/high, good/bad, etc. It is an established fact that, as the complexities surrounding a system increase, making a precise statement about the state of the system becomes very difficult.\n4. Leagility evaluation: a conceptual framework: Leagile supply chain is a new conception proposed in the context of diversified and personalized customer demands; it can quickly respond to fast-changing demands, and modularize all kinds of personalized products as much as possible (Zhang et al., 2012). Successful implementation of a leagility-driven supply chain requires its performance to be assessed.\n5. Case application: This evaluation framework has been case studied in a famous locomotive part manufacturing organization in the eastern part of India. The study presents the application of the conceptual model of leagility embedded with lean and agile principles. A fuzzy logic approach has been used for the evaluation of leagility in supply chains. It is aimed to compute the performance of the supply chain using both lean and agile concepts (as leagility supply chains) through a fuzzy logic approach. General hierarchy model for leagility evaluation has been furnished in Table I. Definitions of linguistic variables for assignment of priority weight and performance ratings have been shown in Table II, which is basically a nine-member linguistic-term set. Linguistic evaluation information needs to be converted into appropriate fuzzy numbers. A fuzzy scale (Table II) consisting of GTFNs has been explored to convert DMs' linguistic evaluation into fuzzy numbers. 
An expert group consisting of ten DMs has been constructed by the top management. The expert group has been instructed to utilize the aforesaid linguistic scale toward assigning appropriateness ratings against each of the third-level leagile criterions; priority weights against individual leagile capabilities (at first level), attributes (at second level) as well as criterions (at third level). Priority weight of leagile criterions (in linguistic terms) assigned by the DMs has been shown in Table III. Table IV represents appropriateness rating (in linguistic terms) of leagile criterions assigned by the DMs. Linguistic priority weight of leagile attributes (at second level) as well as leagile enablers (at first level) given by DMs have been shown in Tables V and VI, respectively. Linguistic data have been converted into appropriate fuzzy numbers as depicted in Table II. The \"Aggregated average rule\" has been utilized to accumulate the DMs' opinions. Table VII represents aggregated fuzzy priority weight as well as aggregated fuzzy rating of individual leagile criterions. Aggregated fuzzy priority weight and computed fuzzy rating (computed using Equation (26)) of leagile attributes have been given in Table VIII. Aggregated fuzzy priority weight and computed fuzzy rating (computed using Equation (27)) of leagile enablers have been tabulated in Table IX. The FOPI thus becomes (Equation (28)): (0.399, 0.554, 1.170, 1.580, 1.000).\n6. Managerial and research implications: The paradigm combining lean and agile principles invites a new management framework. The leagile framework allows firms and supply networks to configure an appropriate profile to face successfully the market volatility and fight to secure competitive advantages. It is particularly important for the firms and enterprises exploiting markets in terms of cost, quality, response time and service level where the consumers seek better responsiveness to meet unpredictable ever-changing demands.\n7. 
Conclusions: Improved supply chain agility and leanness imply that a supply chain is capable of quickly responding to variations in customer demand with cost and waste reduction. Leanness in a supply chain maximizes profits through cost reduction, while agility maximizes profit through providing exactly what the customer requires. This paper aimed to present an integrated fuzzy-based performance appraisement module in an organizational leagile supply chain.\n", "role": "user" }, { "content": "Purpose: In today's ever-changing global business environment, successful survival of manufacturing firms/production units depends on the extent of fulfillment of dynamic customers' demands. Appropriate supply chain strategy is of vital concern in this context. Lean principles correspond to zero inventory level; whereas, agile concepts motivate safety inventory to face and withstand in turbulent market conditions. The leagile paradigm is gaining prime importance in the contemporary scenario which includes salient features of both leanness and agility. While lean strategy affords markets with predictable demand, low variety and long product life cycle; agility performs best in a volatile environment with high variety, mass-customization and short product life cycle. Successful implementation of leagile concept requires evaluation of the total performance metric and development of a route map for integrating lean production and agile supply in the total supply chain. To this end, the purpose of this paper is to propose a leagility evaluation framework using fuzzy logic.\nDesign/methodology/approach: A structured framework consisting of leagile capabilities/attributes as well as criterions has been explored to assess an overall leagility index, for a case enterprise and the data, obtained thereof, has been analyzed. Future opportunities toward improving leagility degree have been identified as well. 
This paper proposes a Fuzzy Overall Performance Index to assess the combined agility and leanness measure (leagility) of the organizational supply chain.\nFindings: The proposed method has been found fruitful from managerial implication viewpoint.\nOriginality/value: This paper aimed to present an integrated fuzzy-based performance appraisement module in an organizational leagile supply chain. This evaluation module helps to assess existing organizational leagility degree; it can be considered as a ready reference to compare performance of different leagile organization (running under similar supply chain architecture) and to benchmark candidate leagile enterprises; so that best practices can be transmitted to the less-performing organizations. Moreover, there is scope to identify ill-performing areas (barriers of leagility) which require special managerial attention for future improvement.\n", "role": "assistant" } ]
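The linguistic-to-fuzzy conversion and "Aggregated average rule" described in the case application above can be sketched in a few lines. This is a minimal illustration only: the three-term scale values, the centroid defuzzification step and the four-decision-maker example are assumptions for demonstration, not the paper's actual nine-member scale or its Equations (26)-(28).

```python
# Sketch of the generic fuzzy-evaluation scheme: map linguistic terms to
# trapezoidal fuzzy numbers (a, b, c, d), average decision-makers' (DMs')
# opinions component-wise, then defuzzify by centroid.
# The scale below is a hypothetical 3-term slice, not the paper's scale.
SCALE = {
    "Low":    (0.0, 0.0, 0.1, 0.3),
    "Medium": (0.3, 0.5, 0.5, 0.7),
    "High":   (0.7, 0.9, 1.0, 1.0),
}

def aggregate(terms):
    """Aggregated-average rule: component-wise mean of the DMs' fuzzy numbers."""
    nums = [SCALE[t] for t in terms]
    n = len(nums)
    return tuple(sum(x[i] for x in nums) / n for i in range(4))

def centroid(tfn):
    """Centroid defuzzification of a trapezoidal fuzzy number (a, b, c, d)."""
    a, b, c, d = tfn
    denom = (d + c) - (a + b)
    if denom == 0:  # degenerate trapezoid (a point/rectangle)
        return (a + d) / 2
    return ((d**2 + c**2 + c*d) - (a**2 + b**2 + a*b)) / (3 * denom)

ratings = ["High", "Medium", "High", "High"]  # four hypothetical DMs
agg = aggregate(ratings)                      # (0.6, 0.8, 0.875, 0.925)
crisp = centroid(agg)                         # a single crisp performance score
```

In the paper's module the same idea is applied hierarchically (criterions, attributes, enablers) with priority weights before arriving at the overall fuzzy index.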
A bibliographic study on big data: concepts, trends and challenges
[ "Big data", "Citation and co-citation analysis" ]
[ { "content": "Summarize the following paper into structured abstract.\n\n1. Introduction: Due to the technological advancements in the twenty-first century, the amount of data generated is doubling each year. Meng and Ci (2013) argued that technologies such as cloud computing, Internet of Things (IoT), and social networking mark the beginning of the \"big data\" era. These data are no longer measured in gigabytes or terabytes (TB), but in petabytes (PB), exabytes (EB) and zettabytes (ZB) (where 1 PB = 2^10 TB, 1 EB = 2^10 PB, and 1 ZB = 2^10 EB). In reference to this, Gantz and Reinsel (2012) forecasted that by the end of 2020, 40 ZB of data will be generated.\n2. Methodology for reviewing the literature on \"big data\": The analysis was performed in two stages:\n3. Review of big data: In this section, we report on the literature review by discussing the definitions, concepts, and characteristics of big data.\n4. Big data: citation and co-citation analysis: In this section, we identify the influential scientific contributions in the field of big data. This section is divided into two sub-sections. In the first, we discuss the results of our citation analysis, whereas in the second section we present and comment on the results of the co-citation analysis.\n5. Current and future trends in big data: Our results reveal that the concept of big data has evolved over the years. Due to the significant advances in technology, huge amounts of data are being generated every year. These proliferated data, termed \"big data\", have become a hot topic for discussion by capturing the attention of academics, professionals, and governments all around the world. Attempts were made to answer the question \"What is big data?\" To date, there is no clear or universally accepted definition of big data (Mayer-Schonberger and Cukier, 2013). As the name itself suggests, \"size\" was conceived as its main characteristic. But later on, Gartner, Inc. 
observed that size may not be the only criterion to adjudge a \"data\" as \"big data\". They proposed 3Vs: volume, variety, and velocity as the new characteristics to describe big data. Another characteristic, veracity, which is widely accepted in literature and is commonly known as the fourth V, was added by IBM. In addition, value and variability (and complexity) came up as the fifth and sixth characteristics which were introduced by Oracle and SAS, respectively.\n6. Conclusions: Drawing broadly on bibliometric and citation and co-citation analysis, we presented an extensive review of literature on big data over the period of five years (2011-2015). We offered insights regarding the contributions of scientific journals towards advancing big data-related research and the contributions of researchers to the emerging field of big data. To our knowledge, this is the first study attempting to classify highly cited and co-cited works related to this field. However, in other fields these tools have attracted growing attention. Our current research has made significant contributions towards theory by identifying fertile research areas. Although we have only cited potential applications from the operations management field, the applications go far beyond management fields and other interdisciplinary areas where we have not focussed. It is important for companies to adopt big data analytics in an attempt to comprehend trends in customer behaviour and provide them improved and customized services. Despite the limitations, we believe that our study provides food for thought and encouragement for researchers to further investigate the field of big data.\n", "role": "user" }, { "content": "Purpose: The purpose of this paper is twofold. 
First, it provides a consolidated overview of the existing literature on \"big data\" and second, it presents the current trends and opens up various future directions for researchers who wish to explore and contribute in this rapidly evolving field.\nDesign/methodology/approach: To achieve the objective of this study, the bibliographic and network techniques of citation and co-citation analysis was adopted. This analysis involved an assessment of 57 articles published over a period of five years (2011-2015) in ten selected journals.\nFindings: The findings reveal that the number of articles devoted to the study of \"big data\" has increased rapidly in recent years. Moreover, the study identifies some of the most influential articles of this area. Finally, the paper highlights the new trends and discusses the challenges associated with big data.\nResearch limitations/implications: This study focusses only on big data concepts, trends, and challenges and excludes research on its analytics. Thus, researchers may explore and extend this area of research.\nOriginality/value: To the knowledge of the authors, this is the first study to review the literature on big data by using citation and co-citation analysis.\n", "role": "assistant" } ]
Asset risk and leverage under information asymmetry
[ "Cost of debt", "Leverage", "Information asymmetry", "Capital structure", "Asset risk" ]
[ { "content": "Summarize the following paper into structured abstract.\n\n1. Introduction: The tradeoff theory of capital structure implies a negative association between a firm's financial leverage and the riskiness of its assets. Bradley et al. (1984) argue that asset risk increases the present value of leverage-related costs (i.e. bankruptcy costs, agency costs of debt and loss of non-debt tax shields), thus decreasing the firm's optimal debt level. Castanias (1983) provides evidence of a negative relation between bankruptcy risk and several leverage indicators. Furthermore, Bessler et al. (2013) note that the increased proportion of zero-leverage firms in the economy can be explained by an increase in asset risk. In contrast, Kale et al. (1991) argue that, when personal taxes are considered, the relation can become positive at higher levels of risk. Claessens et al. (2005) substantiate this positive relation in a cross-country analysis. More recently, Choi and Richardson (2016) report a negative relation between asset risk and leverage. However, half of the firms exhibit a large dispersion in leverage, while their asset risk is similar. For the other half, leverage is similar, but asset risk varies considerably across firms.\n2. Modeling assumptions: The model involves a game between an informed firm and uninformed lenders. The firm is endowed with equity capital, which is set to 1 without loss of generality. To supplement its equity capital, the firm can borrow w - 1 from lenders. Hence, the firm's balance sheet size is w. Abusing notations, I will subsequently refer to w as leverage. The firm's resources are invested in a risky asset, whose returns depend on the states of nature. Uncertainty is represented by the existence of two states in the end period. Each state may occur with the same probability 0.5. Asset returns are assumed to follow a simple binomial process defined by:\n3. 
Equilibrium leverage and cost of debt: Given that the firm defaults in the bad state, the optimal leverage maximizes the expected utility:\n4. Comparison with the case of perfect information: Under information asymmetry, the firm has to bear the cost of signaling its private (asset risk) information. To illustrate this point, consider the firm with minimum risk s*. If lenders are informed about asset risk, the firm can increase its leverage to any arbitrarily high level w > w*. For a given leverage w, lenders must charge the following interest rate, including principal, which ensures that they receive on average the risk-free rate:\n5. Conclusion: Under asymmetric information, higher asset risk can be associated with higher leverage. This result is opposite to the standard prediction of the tradeoff theory of capital structure, which assumes that lenders and borrowers have the same information. However, it is consistent with empirical findings presented by Claessens et al. (2005). The reason for this positive relation is that lower leverage is used for signaling a lower asset risk and reducing the firm's cost of debt. As a result, leverage is lower than under perfect information. This paper provides a rationale for the conservative debt ratios of low risk firms, which has been a longstanding puzzle that Molina (2005) puts down to statistical measurement problems arising from the endogeneity of leverage. In this paper, leverage is endogenously constrained to induce a higher sensitivity to the firm's asset risk. 
We thus offer an alternative explanation to the underleverage phenomenon that differs from those based on management risk aversion and moral hazard (Strebulaev and Yang, 2013).\n", "role": "user" }, { "content": "Purpose: The purpose of this paper is to investigate whether higher asset risk can be associated with higher leverage and to provide a rationale for the relatively low debt ratios displayed by many firms.\nDesign/methodology/approach: A game is modeled between an informed firm and uninformed lenders. The firm knows the risk of its assets, while lenders only know the minimum level of risk. The author solves for the signaling equilibrium and derive explicit formulas for the firm's cost of debt and optimal leverage.\nFindings: In contrast to the tradeoff theory of capital structure, it is found that asset risk and leverage are positively (instead of negatively) related. Furthermore, leverage is lower than when lenders are informed about the firm's risk. These results are illustrated with a numerical example.\nPractical implications: Low-risk firms can choose a lower leverage to signal their lower risk and reduce their cost of debt. High-risk firms may prefer to pay higher interest rates and use higher leverage.\nOriginality/value: The paper is able to explain why some firms use surprisingly low leverage and do not appear to take advantage of their debt tax shields.\n", "role": "assistant" } ]
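The lenders' participation constraint in the two-state model above (lenders must earn the risk-free rate in expectation, given default in the bad state) can be illustrated numerically. All parameter values below are assumptions for illustration; the paper's exact payoff functions and utility specification are not reproduced here.

```python
# Hypothetical numerical sketch of the lenders' break-even condition in a
# two-state binomial model: the firm defaults in the bad state, in which
# lenders seize the residual asset value ("recovery").
def break_even_gross_rate(debt, recovery, rf=0.02, p_good=0.5):
    """Gross repayment rate R (principal plus interest per unit of debt)
    such that lenders earn the risk-free rate in expectation:
        p_good * R * debt + (1 - p_good) * recovery = (1 + rf) * debt
    """
    return ((1 + rf) * debt - (1 - p_good) * recovery) / (p_good * debt)

# Leverage w = 3 on unit equity -> debt = w - 1 = 2; bad-state asset value 1.2.
R = break_even_gross_rate(debt=2.0, recovery=1.2)

# A firm with a higher bad-state recovery (lower asset risk) faces a lower
# required rate; under asymmetric information, this is the cost-of-debt
# advantage that low-risk firms signal by choosing lower leverage.
R_safer = break_even_gross_rate(debt=2.0, recovery=1.8)
```

With these assumed numbers, the riskier configuration commands a visibly higher gross rate than the safer one, which is the margin the signaling argument turns on.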
Transformational leadership and employee voice: a model of proactive motivation
[ "Transformational leadership", "Proactive personality", "Employee voice", "Proactive motivation" ]
[ { "content": "Summarize the following paper into structured abstract.\n\n__NO_TITLE__: Over the past few decades, much research focus has been placed on understanding employee voice because it is related to positive outcomes for the organization (Mackenzie et al., 2011). Employee voice refers to the discretionary communication of ideas and suggestions about work-related issues with the intent to improve organizational or unit functioning (Morrison, 2011). However, engaging voice can be risky for employees because people who make suggestions to change are confronting the status quo (Burris, 2012; Milliken et al., 2003). Since leaders are the major target of employee voice, research has highlighted the role of transformational leadership, which inspires subordinates to work for the good of the organization by motivating them through communication of the vision and addressing their higher-order needs (Bass, 1985).\nLiterature review and hypotheses development: Parker and her colleagues (2010) developed the model of proactive motivation to conceptualize the joint effects of situational and dispositional factors on individual proactive outcomes. They have identified three motivational states as mediating pathways (i.e. \"reason to,\" \"can do\" and \"energized to\") that prompt self-initiated effort to bring about change. 
In the following section, we will discuss how transformational leadership facilitates employee voice through each motivational state.\nMethod: Data and sample\nResults: Descriptive statistics and measurement model\nDiscussion: Theoretical and practical contributions\n", "role": "user" }, { "content": "Purpose: Drawing on the model of proactive motivation, the purpose of this paper is to examine how transformational leadership influences followers' voice behavior through three proactive motivation states, namely, \"reason to,\" \"can do\" and \"energized to.\" It also examines the moderating role of followers' proactive personality in the relationship between transformational leadership and employee voice.\nDesign/methodology/approach: The online survey was distributed through Qualtrics using a two-wave design. In total, 1,454 participants completed the survey at Time 1, of those 447 also completed the survey at Time 2.\nFindings: Transformational leadership influences employee voice via followers' promotion focus, role-breadth self-efficacy and affective commitment. Followers' proactive personality attenuates the impact of transformational leadership on voice, supporting the substitute for leadership hypothesis.\nResearch limitations/implications: Self-reported data are the main limitation of the present study. Other limitations include treating employee voice as a unidimensional construct and oversimplifying the impact of positive affect on voice.\nPractical implications: The present study suggests that training managers to demonstrate more transformational leadership behavior, enhancing employees' proactive motivation and hiring proactive individuals are strategies to facilitate employee voice.\nOriginality/value: The present study contributes to a better understanding of employee voice from a proactive motivation perspective. 
It also demonstrates that followers' proactive personality is an important \"boundary condition\" to transformational leadership.\n", "role": "assistant" } ]
Essential features of a building services standard method of measurement in Malaysia
[ "Malaysia", "Standards", "Construction management", "Cost estimates", "Building services", "Quantity surveying" ]
[ { "content": "Summarize the following paper into structured abstract.\n\n1. Introduction: Building services is a specialised area of the construction industry in terms of the nature of work, scope, technicality and the parties involved. The design and installations are different from the building structure and finishes and some important design information are often not received in good time or produced until the installation stage. Meanwhile, there is need to have a budget for building services to act as a guide for determining a fair price prior to awarding a contract (Swaffield and Pasquire, 2000). However, reliable estimation of costs of building services remains the most difficult to achieve at the early stage of building design process (Aibinu et al., 2015). Industry stakeholders expect the cost of building services to be effectively managed because of the increasing complexity of modern buildings, which has tremendously increased the significance of building services and their relative cost to the total cost of building projects (Yong et al., 2004; CIDB, 2009a, b; Kumar, 2009). The increase in the cost and complexity of building services systems has also increased the range of options available to designers (Churcher, 2009; Cartlidge, 2011; Yik et al., 2013). As a result, building services costs have gone up from the traditional 10-30 per cent of total building cost to between 10 and 70 per cent of the total cost and unless this increase is effectively managed, it will expose clients to a significant cost risk (Langdon, 2010; McCaffrey, 2011). Therefore, detail measurement and quantification of building services is required to improve both clients' satisfaction and the reliability of building services early budget estimate (Babalola, 2012; Ashworth et al., 2013). This may require the input of specialist building services quantity surveyors (RICS, 2000; McCaffrey, 2011).\n2. Literature review: 2.1. Standard method measurements\n3. Research methods: 3.1. 
Data collection\n4. Research findings: 4.1. Need to develop a building services SMM\n5. Discussion: Table II shows that the majority of the respondents (62 per cent) are from consulting firms. In addition, most respondents (93 per cent) have a qualification above diploma level, with registered quantity surveyors accounting for about 45 per cent of the sample population, while none of the respondents had fewer than five years' experience in the construction and measurement of building works. They have been using SMMs for BoQs and post-contract cost management (consulting), the interpretation of BoQs for estimating and tender adjudication (contracting) and interpretation and demonstration to teach students (academics). Therefore, their judgement on the features of an SMM and descriptors of building services components can be considered to be valid and indicative of the opinions of QSs in the industry. The results of the survey also show the need to develop a building services SMM that can reflect local best practices and procedures and comply with international best practices.\n6. Conclusions: The study investigated the need to develop an SMM for building services. The main aim was to identify the essential features of a building services SMM suitable for the Malaysian construction industry. The procedures for developing locally based SMMs in other countries as well as selected SMMs from other countries were reviewed. In total, 33 variables considered to be essential features of an SMM were identified through a literature review. These factors were grouped into three broad categories: the desired structure of an SMM, item descriptors and the determination of measurement units. The COVs computed for the variables were relatively low for the selected features of the SMM under each category. Kendall's concordance test indicated that there is good agreement among the three groups (i.e. 
quantity surveyors in clients', contracting and consulting organisations) and the chi-square (χ2) test shows that agreement among the groups on the ranking of features of the SMM is statistically significant and not a result of mere coincidence.\n", "role": "user" }, { "content": "Purpose: The purpose of this paper is to examine the need to develop a building services standard method of measurement (BSSMM) that would provide a home ground advantage and a framework for managing the cost of building services by quantity surveyors.\nDesign/methodology/approach: A review of previous research on the rules of measurement was carried out. This exercise provided the basis for the design of a questionnaire, which was administered to quantity surveyors in the Malaysian construction industry.\nFindings: The results indicated the need to develop a BSSMM. In total, 33 variables were identified as essential features of such a standard method of measurement. These variables were grouped into three categories and a priority ranking of them was carried out. The analysis revealed a strong agreement among quantity surveyors on the priority ranking of the essential features of the BSSMM.\nResearch limitations/implications: Although the research would assist industry stakeholders in developing a BSSMM that reflects local and international best practices, there is a need to further investigate how the rules of the BSSMM could be aligned with construction industry classification systems.\nPractical implications: The availability of information and communication technology (ICT) tools has transformed the traditional practices in the industry and offered value to quantity surveyors, but if quantity surveyors' tools cannot be consolidated and incorporated into emerging ICT applications, then QS collaboration will be difficult, and the requirements for electronic data exchange will be impractical. 
Therefore, this type of standard method of measurement will facilitate cross-referencing and coordination between the BoQ and other documents such as drawings, specifications and tenders produced by other project participants, and thereby lay an efficient foundation for the effective collaboration of quantity surveyors in a BIM model. In addition, the BSSMM will not only comply with local industry practices, it will also be internationally compatible.\nOriginality/value: Adoption of a structured format based on the BSSMM to present building services cost information is essential in view of the increase in the value and complexity of building services in modern buildings. However, there is a need to enhance quantity surveyors' knowledge of the technology and cost management of building services so as to ensure the reliability of cost advice provided to clients. This goal would be partly achieved through education and training of Quantity Surveyors/Cost Consultants in the interpretation and use of the rules of an SMM.\n", "role": "assistant" } ]
Benchmarking publication metrics for Indian business researchers: Exploring the role of collaboration
[ "Benchmarking", "Management education" ]
[ { "content": "Summarize the following paper into structured abstract.\n\n1. Introduction: Research is an infinite expedition in academia. Not all questions can be answered at once; as such, the process is continuous with several researchers consistently adding value to existing knowledge. Research and its quality are usually measured by publication metrics, the primary objective of which is to motivate researchers to do quality research. Assessing research scholarship of individual academics is difficult (Tasopoulou and Tsiotras, 2017). A range of metrics is being used to evaluate the quality of work at educational institutions across the world (Kashiramka et al., 2019; Reddy et al., 2016). These metrics are used not only by the individual researcher but also the institution for academic promotions and research grants, and the government for funding, subsidies, grants and jobs. There is a lack of uniformity in the publication policies followed across different countries, which makes it difficult for researchers to comply with the needs and demands of changing policies (Eacott, 2016; Dumay et al., 2018).\n2. Review of literature: Research publication has gathered momentum in the recent past across every discipline and country in the world. There are multiple parameters to measure and evaluate the performance of researchers, while the methodologies adopted and proposed by earlier studies are under the scanner and are often challenged. Khosravi and Chavan (2012) argued that the current measures of research were not sufficient for providing a comprehensive and logical scoring.\n3. Research design: This study was conducted using secondary data from the Scopus website. Earlier studies have expressed the importance of the Scopus database and claimed it to be better than Web of Science due to its comprehensiveness (Montoya et al., 2018; Mongeon and Paul-Hus, 2016). 
As the study aims to look at publications undertaken in the area of Business and Management, the Scopus website was accessed and advanced filters were used to draw relevant research papers. Scopus lists every journal under appropriate titles like Business, Economics, Computer Studies, Arts, etc., which helps in identifying appropriate journals.\n4. Results and discussion: The present study extracted raw data from Scopus and processed it to understand trends in publications by various Indian colleges/universities in the area of business and management. This section discusses the results and the inferences drawn in order to understand the nuances of choices made by Indian researchers in both public and private colleges/universities. The section discusses four major attributes of publications: rankings of colleges, rankings of journals, rankings of authors (overall publications) and rankings of publications based on citations (Reddy et al., 2016; Basu et al., 2016).\n5. Conclusions: The study clearly indicates the current status of management research in India - there is pressure to undertake research, publish and enhance the reputation of the schools to draw in high-quality faculty and students. However, India lacks the ecosystem for research: the means to spawn research ideas, gather accurate data and get the results validated and published are inadequate. On the one hand, Indian B-schools are primarily teaching institutions where the teaching load is not allocated after factoring in time for research, and, on the other hand, the tenure system of these schools measures the quantum of research by setting a norm of publish or perish. This has resulted in a poor count and quality of published papers in reputable journals. 
This study, which is a trend-based analysis to understand the extent and role of collaboration between management researchers in India, addresses gaps in the literature by extracting data from the Scopus database and providing evidence of the current situation. Secondary data on publications by Indian authors with Indian co-authors were taken and descriptive analysis was performed. The study concludes that the rate of collaboration with researchers from the home country is low in India for top-rated publications. This implies that management researchers with native collaborations are unable to target journals with a high impact factor (IF). The majority of publications by management researchers are listed only in Scopus-indexed journals; just a handful of publications feature in A and A* journals. The trade-off is achieved by most authors by publishing in C, followed by B, category journals. While the country's oldest institutions (including IITs, IIMs and some private colleges) feature in the list, their publishing trends are almost similar to those of other private counterparts. This has also resulted in fewer citations. Policy makers and educational organizations should provide a supportive research environment that begets a performance management system that encourages international collaborations through association with global counterparts to share ideas, resources, knowledge and expertise. Other measures may include funding for travel and research, providing the latest databases, encouragement to attend lectures by researchers from across the world, etc. If researchers are assigned well-defined targets for publication complemented with requisite infrastructure and accommodating work schedules, there will be less pressure to meet these targets and fewer incidences of unethical publication practices. This study and its results would help educational researchers to dig deep into the root cause of why and how collaboration within and outside the country impacts the quality of publication. 
Results would also encourage ranking agencies to award suitable rankings to colleges promoting collaboration within and outside the country. With the UGC making and revising lists of acceptable publications for Indian academics, it is important for them to design a matrix to define and universalize quality for publication as well. This study also enables researchers, journals and colleges to understand their rankings in terms of associations within the Indian sub-continent or in India. Last but not least, international journals must consider publishing authors from Asian countries to get a global perspective on a given subject.\n6. Limitations and Future scope of research: The study presents a discussion on trends in publications by Indian authors with other Indian researchers. The study could have been further validated by data collected from these researchers in order to understand their motivation toward collaborating with Indian researchers. The study could have used a comparative chart for understanding trends in publications by authors from the home country and other countries. A similar study could also be carried out in other countries with their respective journal indices. The study also needs further exploratory analysis in understanding which factors would have led to such a trend, thereby giving a wider perspective. Nevertheless, this can be discussed in the future by other researchers.\n", "role": "user" }, { "content": "Purpose: There is no particularly efficient way to measure research output, but effectual assessment of research output is necessary to motivate and encourage researchers to enhance their research performance and disseminate knowledge. The purpose of this paper is to highlight the exacerbating pressure on Indian academics to deliver incessantly in terms of research output and identify an agreeable way to increase research output. 
The study explores trends in management publications by Indian academics and presents an overview of collaborative practices by Indian management researchers that could shed light on future trends in collaborative publishing.\nDesign/methodology/approach: The study focuses on publications in the area of Business and Management. Scopus was used with advanced filters to draw relevant research papers. Data were then sorted and filtered on the basis of quality determined through Australian Business Deans Council rankings and diverse bibliometric information to understand the pattern of research output of Indian academics.\nFindings: The research found that the rate of collaboration with researchers from the home country is low for top-rated publications. The majority of publications were listed in Scopus-indexed journals, whereas a handful featured in A and A* journals. Papers in A and A* journals were predominantly co-authored with academics from universities outside the country. The trade-off was achieved by the majority of authors by publishing in B, followed by C, category journals to achieve research outputs.\nResearch limitations/implications: The study is limited to publications in the area of Business and Management and may be extended to other disciplines such as economics, engineering, law, medical sciences, etc., to understand publishing trends at universities in the country. The study can also be conducted to understand a similar dilemma with academics in other countries with evolving research cultures.\nPractical implications: The study would help management researchers to dig deep into the root cause and understand why and how collaborations within and outside the country impact the quality of publications. The results would further encourage ranking agencies to award suitable grades to colleges that promote collaboration within the country as well as international collaboration. 
The study also conducts a benchmarking exercise of the institutions in the country that would be useful for researchers, journals and colleges.\nOriginality/value: While earlier studies have highlighted the importance of foreign collaborations in academic publishing, only a handful of studies have focused on the role of collaboration within the home country. The practice of Indian authors collaborating with Indian counterparts at other universities would help understand the expertise of researchers at different universities and encourage the collaboration process and quality of publications in the country.\n", "role": "assistant" } ]
Controlling the flood: when more of a dose is less
[ "Information", "Metaphor", "Information load", "Dosage" ]
[ { "content": "Summarize the following paper into structured abstract.\n\n__NO_TITLE__: Information has always been a source of power, but it is now increasingly a source of confusion. In every sphere of modern life, the chronic condition is a surfeit of information, poorly integrated or lost somewhere in the system (Wilensky, 1968, p. 331).\n1. The flood: We have been contemplating the growth of information in the modern age since its very beginnings. Information technologists who focus on information retrieval systems essentially assumed that more information was always good (Jansen and Rieh, 2010). So, normally we see big data, and even larger opportunities, but increasingly we sense a flood that threatens to engulf us and our institutions (Gleick, 2011).\n2. Information and information load: Before exploring the flood more completely we must define the central concepts of information and information load. Needless to say there has been considerable controversy in the literature about how information should be defined and where it stands in relation to other concepts (e.g. data, knowledge, wisdom) (e.g. Fricke, 2009; Capurro and Hjorland, 2002; Floridi, 2010; Cornelius, 2002; Gleick, 2011). Put simply, information can be defined as any stimuli we register or recognize in the environment around us (Miller, 1969). In this view information involves the recognition of patterns in our surrounding basic matter/energy flows (Bates, 2006; Case, 2007; Hjorland, 2007; Farace et al., 1977). It focusses on the nature of an individual's cognitive processes, arguing that they shape what we consider to be information, what we will perceive, and how we will perceive it (Miller, 1969). But, there is also a sense that information is what you use to develop a higher level of comfort, perhaps even more of a feeling of familiarity with a situation. The more confident and sure you are about something the less uncertain it is (Farace et al., 1977). 
Thus, information can also be viewed as the number (and perhaps kind) of messages needed to overcome uncertainty (Krippendorf, 1986) and uncertainty reduction has often been cast as a \"direct and linear function of an objective quantity of information\" (Afifi and Afifi, 2009, p. 1); however, \"[...] it is less common but equally important to realize that a surfeit of information can confuse and overwhelm us\" (Babrow et al., 1998, p. 16), exactly the issues raised by the flood.\n3. Dosage and metaphor: Fundamentally, in our day-to-day interactions with others, we must constantly confront issues of dosage. Dosage is derived from a Greek word which literally means the act of giving. In modern usage it is most commonly thought of as the administration of a therapeutic agent in prescribed amounts. Metaphors become a way of seeing and thinking about life; they provide a way of understanding one thing by appeal to another. They also reflect central issues of a particular time and place. The dosage metaphor offers an encompassing way of addressing the flood. It focusses on the issue of how much should I give and/or take, how often, for how long, with an underlying appreciation for the darker side of effects (e.g. overdoses, allergic reactions, contraindications) as well. Dosage as a metaphor focusses our attention on such fundamental, pragmatic communication issues related to controlling the flood as: amount, frequency, sequencing, delivery system, interaction with what other agents, and contraindications.\n4. Coping mechanisms: In this section we will discuss four general strategies for coping with the flood: escape, attention, delegation, and creative destruction. In doing this we will systematically articulate for each how it relates to the six elements of the dosage metaphor we have just discussed (see Table I). 
In many ways this is a natural ordering, with escape the least constructive strategy and delegation and creative destruction both easing one's load and providing possibly constructive, rational approaches for dealing with it.\n5. Discussion: As we have seen, the dosage metaphor offers us a rich way of conceiving of the flood and can be used to better understand coping mechanisms associated with it. We conclude this essay by focussing on the central role of matching as a guiding principle for thinking about how we should cope with this central problem of the information age. We then discuss the difference between tactics that are efficient and those that are effective. Finally, we suggest a minimalist approach that offers the hope of clarity in acting in an age that increasingly overwhelms us.\n6. Conclusion: There has been a communication metamyth that more is necessarily better. As a result we do not have measured approaches to problems. We do not know when to stop, and perhaps like a doctor who gives someone a drug because they expect one, in our consulting roles we are often pressed to do something. What is our equivalent of the \"effective dose\" concept in medicine - the smallest amount of a substance needed to produce a measurable effect? The central problem of the information age is adjusting the dosage, controlling the flood by a renewed focus on simplicity and forgetting.\n", "role": "user" }, { "content": "Purpose: The purpose of this paper is to explore the flood of information that is inundating us. 
This flood makes it increasingly difficult to make sense of the world and arrive at the correct interpretation of events.\nDesign/methodology/approach: Classic narrative literature review applying the dosage metaphor to the growing problem of information overload.\nFindings: The seven elements of dosage - amount, frequency, sequencing, delivery systems, contraindications, interactions, and dysfunctions - are used to discuss four major coping mechanisms - escape, attention, delegation, and creative destruction - for dealing with the flood. Each of the coping mechanisms has different entailments for the dosage elements.\nOriginality/value: This essay develops a guiding principle for thinking about how we should cope with this central problem of the information age: suggesting a minimalist approach that offers the hope of clarity in acting in an age that increasingly overwhelms us.\n", "role": "assistant" } ]
An approach to improving logistical performance with cross-unit benchmarking
[ "Benchmarking", "Performance measurement (quality)", "Inventory management" ]
[ { "content": "Summarize the following paper into structured abstract.\n\nIntroduction: Benchmarking has established its position as an important approach in improving business processes, as can be seen in the wide body of academic literature published during the last 15 years (Yasin, 2002; Dattakumar, 2003). Much of the research is strongly practice based, which makes benchmarking, one can justifiably claim, an applied discipline (Camp, 1989) that is well-grounded in managerial practice. Its effectiveness has been proven in large companies, such as Xerox, which was the first to implement it (Camp, 1989), as well as in small- and medium-sized enterprises (Cassell et al., 2001).Theoretically, the concept of benchmarking has evolved from the early days of reverse engineering through the core of performance, process, and strategic benchmarking towards such forms as global benchmarking, competence benchmarking, and network benchmarking (Kyro, 2003). While, as a maturing discipline, benchmarking research tends to search for new innovations and broaden its scope, its hard essence is still in the \"process of identifying the highest standards of excellence for products, services, or processes, and then making improvements to reach those standards - commonly called best practices\" (Bhutta and Huq, 1999). Benchmarking and best practice-based improvement are closely related to two other disciplines popular in current academic discussion, namely organizational learning (Argyris, 1993; Senge, 1990) and change management (Dawson, 1994; Kotter, 1996). Fundamentally, benchmarking is about learning (from best practices) and change (to better practices). When learning is systematic and involves the majority of stakeholders in activities such as internal benchmarking, it is highly probable that the changes emanating from such learning will be more readily implemented within the organization (Hyland and Beckett, 2002). 
Benchmarking can be seen as a tool for organizational learning (or even a research method (Kyro, 2004)), and learning fosters change by creating commitment to implementing the changes as the participants gradually see the benefits during the learning process. Especially in large multinational companies, internal benchmarking has been found to have several change-catalyzing benefits. It can be used to avoid some barriers to change that have been observed when using ambitious external benchmarking processes (Hyland and Beckett, 2002). Also, when benchmarking internally, it is easier to ensure that the processes are comparable enough in terms of operating environments. Even though companies can get hard data on explicit practices, much of the knowledge on business processes is tacit in character and thus tied to the minds of individuals, and hard to access from totally external sources. There are some principles that are important to make a learning and change process successful (Hyland and Beckett, 2002). The process must have a clear focus on outcomes: improving the bottom line in the short term, and improving competitiveness in the long run. The data used has to be clear and understandable: to stimulate learning, the data needs to be converted to knowledge and couched in the language of the organization. The data must be collected in meaningful ways that are understood and accepted by the people involved. These principles pose rather strict limitations on the complexity of the benchmarking data and the process. They may tip the balance in favour of internal benchmarking, even though external sources often provide more innovative practices to compare with. 
To get the most benefit from the characteristics of internal and external benchmarking, an analysis across several business units of a company often gives a good combination of variation of practices and performance, as well as enough similarity in the operating environment and access to the relevant data sources, including tacit and confidential business knowledge. This study applies a cross-unit performance benchmarking approach to find out the improvement potential and possible actions for a regional construction product distributor operating six business units. The object of the benchmarking was selected to be the logistical process of the business units, or more specifically, inventory control efficiency, which is one of the key success factors of a distributor company. Since the company had no earlier experience in benchmarking, it was decided to begin with performance benchmarking, which best fulfils the above-mentioned principles of outcome focus, as well as the clarity and understanding of data. It is also the most promising way to provide quick wins and create belief in the necessity of change, which are important early phases of a change process (Kotter, 1996; Niemi et al., 2007). This paper presents a stepwise benchmarking process, where the use of the basic inventory control practices is complemented with benchmarking analysis among business units of the same kind, though not entirely similar. The viewpoint is that of a supply chain specialist; the main objective of the paper is to present how the analyses should be carried out, taking into account the suggestions presented in Niemi et al. (2007). In the following sections, we describe the benchmarking procedure tailored for the specific activity, inventory control, in detail.\nApplication environment as basis for analysis: The case company is an independent distributor of a focused range of products, serving wholesale buyers, industrial customers and consumers. 
It has a countrywide network of regional distributors, which are independent profit centers, six depots of which were considered to represent the diversity of practices and policies well enough. The total annual turnover of the case depots is roughly EUR100 M. The regions and regional competitive situations are very different, and therefore the product mixes of the distributors differ from each other quite a lot. The decisions on product variety are basically made locally, which is why some of the distributors are strictly focused on a core product variety, while others serve their customers with a larger variety of additional, related products. Inventory management is in practice decentralized to salesmen, so that a salesman is responsible for replenishments of a named group of products. The culture of the company is very selling-oriented: the whole personnel of the depots participate in selling and consider it their main task, with inventory management as a secondary task. Efficient inventory management has a major impact on profitability in wholesaling and retailing businesses, which are characterized by a wide product assortment and narrow contribution margins. In managerial practice, the management of product inventories often relies on rather straightforward and simple-to-use models, such as economic order quantity, ABC-analysis, and statistical safety stock algorithms (Silver et al., 1998). These models are based on quite restrictive initial assumptions in terms of, e.g. demand patterns and cost-parameters. At the same time, the relaxation of the assumptions leads to an increasing complexity of models, which may undermine their practicality. 
This presents one of the most challenging questions in management research: how to provide tools that improve the quality of decisions without complicating the analysis too much. Since inventory management is regarded as a secondary task by the salesmen, and therefore run by simple rules of thumb, it is obvious and understandable that skills, resources and motivation to implement sophisticated inventory decision-making practices are missing. The very discrete and fluctuating demand of individual items, and the useful practice of combining orders of several items from the same supplier, make the inventory parameter calculations very challenging even for a skilled practitioner. The parameters that would be needed even for the simplest models are difficult to define reliably. However, also in this company, it is seen necessary to improve the inventory decision making. There is obviously a need for a different kind of approach, one that does not rely on arbitrary parameters or too complex algorithms, but which reveals the most important improvement objects. The approach selected to respond to this need was learning by benchmarking. The approach can naturally be criticized for only being able to provide results as good as the benchmarks, and even that with a time lag. However, it can be a useful tool for learning efficiently from the company's internal practices. The starting point of our approach was the assumption that despite the perceived differences of the distributing units, the best or at least better practices can be found and distributed within the company. The organizational structure, which consists of independent units, provides good grounds for benchmarking the inventory practices of the units. One of our goals was to keep the analysis practically relevant and facilitate the implementation as much as possible, taking into account the roles of an analysis presented in Niemi et al. (2007) as far as possible. 
Therefore, our approach follows four principles:\n1. Refining the precision of the analysis stepwise. The analysis should proceed from higher aggregation towards details only as deep as is seen necessary.\n2. Accepting the practical limitations of the data. The work of estimating parameters for the calculations with the management should be avoided as far as possible. The calculations should be done in order to find and point out the different development areas of the inventory management of each depot.\n3. Disaggregating the task to be benchmarked into manageable sub-areas. Though it could be seen in the aggregate performance figures that none of the units was a good benchmark as a whole, it can be expected that acceptable performance and practices will be found in disaggregated areas.\n4. Engaging the management and key implementers in the analysis work. The aim of each analysis phase is to present reliable figures that are easy to combine with the management's subjective knowledge.\nQuantitative inventory data is used to reveal two kinds of possible deviations in the performance of the individual units:\n1. The actual inventory levels of a particular unit are potentially too high when compared to items with a similar demand structure in other units.\n2. The actual inventory levels of a particular unit are potentially too low to maintain the service level when compared to items with a similar demand structure in other units.\nThe purpose of the approach is to find out specific cases where the outcome of the effort gives the highest performance improvement. Besides resulting in quick wins in some areas, the approach prepares the organization for a more disciplined way of managing their inventories. 
The following section presents the benchmarking procedure in detail, using the case company data as an illustration.\nBenchmarking analysis framework: Underlying logic of the benchmarking analysis\nDiscussion: In managerial practice there has to be a balance between the effort and the intended level of precision. The benchmarking approach presented in the paper has been proposed to satisfy the managerial need for simple but effective tools to find the improvement objects in the field of inventory management. The main difference between the benchmarking approach and, e.g., optimizing approaches is that only the relative differences need to be analyzed instead of determining the absolute optima of the selected variables. This choice considerably decreases the effort needed, and is also more in line with the level of reliability of the data that can be achieved in practice. According to the authors' experience, in most practical situations it is also sufficient to point out the differences in performance, and the managers have an intuition of the potential reasons behind the low performance. The value of the approach is not in its theoretical sophistication; rather, it is a complementary approach to the traditional inventory models, which often rely on more or less arbitrary parameter values. The analysis provides realistic, individual inventory objectives and gives suggestions for focused improvement actions. The presented approach relies on the assumption that finding differences is enough to get improvement efforts going, and that this can be done by means of simple comparisons. In optimizing approaches, the validity of the used parameters is always more or less questionable, and the approach of relying on sharing practices already in use avoids the problems of missing and questionable parameters. 
In the presented approach, quantitative inventory data is used to reveal whether the actual inventory levels of a particular unit are potentially too high when compared to items with a similar demand structure in other units, or potentially too low to maintain the service level when compared to items with a similar demand structure in other units. The logic of the benchmarking approach follows four principles which were, as the main finding of this study, found useful in carrying out the inventory benchmarking approach. Firstly, the analysis proceeds stepwise from higher aggregation towards details, only as deep as is seen necessary. Besides this, the approach potentially saves analysis effort, as the first steps alone may already point out the development needs, which can have a significant effect on the commitment of the managers and make them involved in the process. Secondly, the process shows how most of the parameter estimation work can be avoided. Traditionally, the logic of inventory management is to start with setting the service level objectives and fixing variables for calculations. By accepting the practical limitations of the data (e.g. in the case study, the fact that the company was not able to quantify actual, and therefore target, service levels), the approach of bringing in the quantified ITOs to be joined with tacit knowledge on actual service levels can serve as a learning process and lead towards better measurement and better abilities to adapt traditional optimizing approaches. 
In the case, avoiding the estimation of parameters for calculations with the management, and replacing it with discussion on the causes and effects of performance differences between generic product groups, turned out to be a recommendable practice. The third principle, disaggregating the task to be benchmarked into manageable sub-areas to find the best performance and practices in disaggregated areas, was a more demanding task than expected beforehand, mainly because it was necessary to go to a more disaggregated level than expected to find clear performance differences and their explanations. Finally, a prerequisite for the whole approach is engaging the management and key implementers in the analysis work. It can be said that without it the stepwise approach would be obsolete. To conclude the main finding of the study, we suggest that the behavioral elements of decision-making should not be underestimated in the implementation of all kinds of new tools and practices: resistance to change is inherent in all improvement work, and the credibility of the input data often plays a decisive role. It would be highly recommendable to take this aspect into consideration already in the analysis stage and to develop analysis procedures that have a positive effect on the management's commitment to the implementation of the results. In the presented approach, the managers are asked to comment on the findings after each phase, and are encouraged to provide their intuitive views about the reasons behind the observed differences in performance. By using as reliable information as possible, by means of generic variables, the need for change gains the necessary credibility among the managers. In the analysis, we introduced the idea of using logistically generic product groups for comparison purposes, and suggested three variables (demand volume, demand variation, and replenishment lead time) to be used in the grouping. 
The selection and use of the generic variables need more thorough examination and remain the subject for further research. As the performance indicator we used inventory efficiency, which aims at neutralizing the performance differences due to environmental and structural conditions by relating the actual performance to the goals, which are based on unified policies among the units. Inventory efficiency, which is formed as a quotient of two relative performance measures, is perhaps too theoretical for practitioners to perceive, and it needs to be further considered what would be the best way to measure and demonstrate the performance differences in practice. One important reason for carrying out a case study is to raise questions on the phenomena behind the observations. From the authors' point of view, maybe the most interesting one is the perception that there seems to be a fit between the sophistication level of the adopted techniques and some organizational factors. The benchmarking approach, producing practical, easy-to-assess guidelines to manage inventories, is suitable for situations where the parallel structure of similar units is considered best from an organizational point of view. An interesting question is: what are the organizational determinants, besides skills, that define the sophistication level of the tools and practices to be implemented?\n", "role": "user" }, { "content": "Purpose: - The paper aims to help identify the best logistical practices and to implement them with minimal organizational resistance in an environment with relatively similar business units, utilizing an internal benchmarking approach.\nDesign/methodology/approach: - A stepwise benchmarking process, where the use of basic inventory control practices is complemented with benchmarking analysis among business units that are of the same kind, but not entirely similar. 
The viewpoint is that of a supply chain specialist, and the main objective of the paper is to present how the analysis should be carried out.\nFindings: - Four principles were found useful when carrying out the analysis for internal benchmarking: refining stepwise the precision of the analysis, accepting the practical limitations of the data, disaggregating the task to be benchmarked to manageable sub-areas, and engaging the management and key implementers in the analysis work.\nOriginality/value: - The approach is a complementary approach to the traditional inventory models, which often rely on more or less arbitrary parameter values. The analysis provides realistic, individual inventory objectives and gives suggestions for focused improvement actions.\n", "role": "assistant" } ]
A unified perspective on the factors influencing consumer acceptance of internet of things technology
[ "Information systems", "Innovation diffusion", "Hybrid intelligent business systems", "Innovation adoption behaviour", "RFID application to logistic" ]
[ { "content": "Summarize the following paper into structured abstract.\n\nIntroduction: The term internet of things (IoT) was coined in 1999 by Ashton (2009), a British technology pioneer who helped develop the concept (Gubbi et al., 2013). The IoT aims to extend the benefits of the regular internet - constant connectivity, remote control ability, data sharing, and so on - to goods in the physical world (Peoples et al., 2013). On the most basic level, the IoT involves fitting objects with a microchip and a communications antenna (Deutsche Welle, 2012). Using radio frequency identification (RFID), every real object in the analogue world could have a unique identifying number, like an IP address (Gubbi et al., 2013). Today, RFID chips are used by many companies to manage their inventories. They also make passports scannable by Homeland Security, and enable farecards to be read at subway terminals. Farmers use the chips to keep track of livestock. In 2011, the world spent an estimated $6.37 billion on RFID chips, but that market is expected to balloon to more than $20 billion by 2014 (RFIDWorld.ca, 2012). According to Cisco Internet Business Solutions Group (Deutsche Welle, 2012), the number of objects connected to the internet exceeded the number of people on earth in 2008. ABI Research (2013), a market research firm, predicts that more than 30 billion devices will form an IoT by 2020. Having lots of things connected will change our life and the IoT technologies have the potential to change the world, just as the internet did (Schlick et al., 2013).\nLiterature review: IoT technology\nConceptual framework and research hypotheses: In this study, consumer acceptance is defined as the relatively enduring cognitive and affective perceptual orientation of an individual. Similar to previous work, the construct of intention to use is adopted and considered as a proxy for consumer acceptance (Mathieson, 1991; Venkatesh and Davis, 2000). 
This is a particularly suitable concept since empirical findings underscore the idea that intention to use is an appropriate predictor of later usage (Lee et al., 2012; Sheppard et al., 1988). A model depicting the relationship between the antecedent variables and intention to use IoT technologies is presented in Figure 1. The attitude construct in the original TAM has been removed from the model for simplicity purposes (Luarn and Lin, 2005). On the theoretical front, an abundance of research studies has reported a strong and significant causal relationship between user beliefs and behavioral intention toward technology (Lee et al., 2012; Lu and Su, 2009; Song et al., 2008; Zhang and Mao, 2008). It is therefore theoretically justifiable to use behavioral intention as a single dependent variable to examine the acceptance of IoT. This study expects that perceived usefulness, perceived ease of use, trust, social influence, perceived enjoyment, and PBC will be positively related to behavioral intention toward the use of IoT technologies. In addition, perceived ease of use and trust will be positively related to perceived usefulness.\nMethodology: Data collection\nResults: Following the two-step approach recommended by Anderson and Gerbing (1988), we first examined the measurement model to test reliability and validity. We then examined the structural model to test the research hypotheses and model fitness.\nDiscussion: The study aims to analyze the factors that influence users' acceptance of IoT technologies in an integrated manner. The results confirm the robustness of the TAM, explaining technology acceptance behavior for users within the context of IoT technologies. Suggesting an extended TAM to which one social context variable (i.e. social influence) and two individual user characteristic variables (i.e. 
perceived enjoyment and PBC) are added, the study finds that a consumer's perceptions of usefulness, ease of use, behavioral control, enjoyment, and social influence are predictive of his/her intention toward using IoT technologies.\nImplications: This study has both theoretical and practical implications. The main theoretical contribution is that it theorizes the factors influencing the consumer acceptance of IoT technologies from a unified perspective, including technology perceptions (perceived usefulness and perceived ease of use), a social context variable (social influence) and user characteristics (perceived enjoyment and PBC). It links the constructs of social influence, enjoyment, and PBC to the TAM and successfully extends the TAM in the IoT technology context, which differs from the context of other information systems. This integrated model achieves a greater understanding of consumer acceptance of IoT technologies in China while retaining the parsimony of the model at the same time. The study contributes to current knowledge pertaining to consumer behavior and marketing in two ways.\nLimitations and directions for future research: This research has some limitations. First, this study did not incorporate actual usage behavior into the proposed model. However, substantial empirical evidence exists regarding the causal link between intention and usage behavior (Lee et al., 2012). Second, it should be noted that a cross-sectional research design does not provide as much insight as does a longitudinal research design. For this reason, future research should adopt longitudinal data, in order to further analyze the time sequence of the relationships among constructs. Third, although the study aims at exploring the area of consumer acceptance of IoT technologies in emerging economies, the focus was only on one such economy, China. 
As there may be very different sociocultural beliefs, governmental regulations and business norms compared with other developing economies, it is recommended that future research should expand the boundaries to investigate IoT acceptance in other developing economies (e.g. India, Russia), in order to verify the validity of the model established in the present study. In addition, future research needs to replicate the study using different IoT product categories to improve the generalizability of the research model. Qualitative research is also encouraged in future work to gain a better understanding of consumers' perspectives towards IoT technology.\n", "role": "user" }, { "content": "Purpose: With recent advances in internet technologies, internet of things (IoT) technology is having an increasing impact on our daily lives, and beginning to offer interesting and advantageous new services. The current research aims to develop and test an integrative model of factors determining consumers' acceptance of IoT technology.\nDesign/methodology/approach: Based on the technology acceptance model (TAM), the authors proposed an IoT acceptance model that consists of three technology factors (perceived usefulness, perceived ease of use, and trust); one social context factor (social influence); and two individual user characteristics (perceived enjoyment and perceived behavioral control). Data from 368 Chinese consumers were used to test the research model through the use of structural equation modeling.\nFindings: The results showed particularly strong support for the effects of perceived usefulness, perceived ease of use, social influence, perceived enjoyment, and perceived behavioral control. However, trust played an insignificant role in predicting the intention. In addition, perceived ease of use and trust were found to affect perceived usefulness. 
Compared with the individual TAM model, the integrated model provides a better explanation of user behavioral intention toward IoT usage.\nOriginality/value: The integrated model explores the driving factors of individuals' willingness to use IoT technology from the perspectives of the technology itself, the social context and individual user characteristics. It links the constructs of social influence, enjoyment, and perceived behavioral control to the TAM and successfully extends the TAM in the IoT technology context.\n", "role": "assistant" } ]
Effect of the use of dry breadcrumb in meatball production on the formation of heterocyclic aromatic amines
[ "Breadcrumb", "Meatball", "Heterocyclic aromatic amine", "Cooking" ]
[ { "content": "Summarize the following paper into structured abstract.\n\n1. Introduction: Bread is one of the most consumed grain products in Turkey and is consumed in a variety of types, such as phyllo bread, flat bread (pide), lavash and basil bread, depending on the region (Anonymous, 2015). Furthermore, bread is also a food product that can be consumed together with all kinds of food dishes, which makes it a common companion for all meals. Bread is especially consumed in countries that have economic difficulties because it serves as a staple food product. As a result, in our country, bread is found on the tables at every meal as an irreplaceable food product, mainly due to the dietary habits and the socioeconomic structure of the Turkish society (Gocmen, 2001). In addition, it is also a very affordable food product, which makes it an indispensable staple food for every meal. However, due to the low manufacturing quality of bread and also due to the fact that bread stales quickly as a food product, it inadvertently causes the bread waste in our country to reach very high levels (Gocmen, 2001). Besides being consumed alongside meals, bread is also involved in the preparation of various food dishes. Furthermore, unused breads can also be used in making meatballs after being dried. Thus, bread waste can be partially prevented, and by using dried breads, various advantages can be gained while making meatballs (taste, aroma, texture, cost, etc.). The demand for ready-made food products is increasing day by day due to various factors such as women taking an active part in business life, changing working conditions and so on. Thus, the meatball has an important place among these ready-made food products, and it is widely liked by the Turkish society. 
Meat and meat products, which have an important place in our daily nutritional intake due to their nutrients, also contain various harmful substances which exist naturally within meat or which may be formed later during the storage or the cooking process. Heterocyclic aromatic amines (HAAs) formed in cooked meat and meat products are one of these harmful chemical compounds (Skog et al., 1998). Japanese scientists first discovered HAAs in grilled meat and fish in 1977 (Oz and Kaya, 2011a). To date, more than 25 HAAs have been isolated and identified from foods (Oz and Kotan, 2016; Oz, 2019). Epidemiological studies performed on HAAs have revealed that all HAAs are carcinogenic in nature and most of them are also mutagenic (Skog et al., 1998). The types and amounts of HAAs formed in meat and meat products can change depending on the meat type, cooking conditions, pH of the meat, water activity, carbohydrate content, free amino acid content and type, creatine content, heat and mass transfer, fats, fat oxidation and the presence as well as the amount of antioxidants (Skog et al., 1998; Oz and Kaya, 2011a). The meatball is considered a regional food product, which embodies the various geographical regions and the social structure of our country. The meatball also has an important place both in the ready-to-serve food industry and in household use. Many research studies, including our study, in the literature were conducted on the formation and reduction of HAAs in meat and meat products. However, although dry breadcrumb is frequently used both in domestic-type and commercial meatball formulations, to the best of our knowledge, the effect of using dry breadcrumb in meatball production on HAAs formation has not been investigated in the literature. 
Therefore, in the present study, the effects of the use of dry breadcrumb in different ratios (5 percent, 10 percent and 15 percent) in meatball production on the formation of HAAs in meatballs cooked at different temperatures (150 degC, 200 degC and 250 degC) and on some quality parameters of the meatballs were determined. The present study is very important for establishing a database and for evaluating the relationship between meatball consumption and health.\n2. Material and methods: 2.1 Raw materials and meatball production\n3. Results and discussion: 3.1 Chemical and physicochemical properties of beef, intermuscular fat and raw meatball\n4. Conclusion: In the present study, it was determined that the usage of dry breadcrumb in the preparation of meatballs caused a reduction in the water content, cooking loss and lipid oxidation values of the meatballs. On the other hand, it is quite difficult to compare the total HAAs content obtained in the present study with the total HAAs content found in the literature. The main reason for this is that this is the first study in which HAAs are examined in meatballs cooked with added dry breadcrumb. Furthermore, it is also known that there are many factors (meat type used in cooking, cooking method, cooking temperature, cooking time, existence and amount of HAA precursors, etc.) affecting the HAAs content. Despite all these issues, the total HAAs amount calculated in the meatballs analyzed in the present study is considerably lower than the data in the literature.\n", "role": "user" }, { "content": "Purpose: Heterocyclic aromatic amines are mutagenic and/or carcinogenic compounds formed during the cooking of meat. Therefore, their formation levels should be reduced. For this aim, the purpose of the present study was to determine the effect of dry breadcrumb in meatball production on the formation of heterocyclic aromatic amines. 
In addition, the effect of dry breadcrumb on some quality parameters of the meatballs was also investigated.\nDesign/methodology/approach: Meatball dough with 15 percent fat was divided into four parts, where one group was selected as the control group (without dry breadcrumb). In the other three groups, dry breadcrumb was added in various ratios of 5 percent, 10 percent and 15 percent (w/w). Then, the meatball dough was stored at 4 degC for 3 h and shaped into meatballs with a metal shaping device (7 x 1 cm). For some of the quality parameters, water, pH, cooking loss and TBARS analyses were done on the samples. Heterocyclic aromatic amine analysis was done only on the cooked samples according to solid phase extraction.\nFindings: The use of dry breadcrumb in meatball production decreased water content and cooking loss. As cooking temperature increased, the water contents and cooking loss of the meatballs decreased. PhIP, AaC and MeAaC were not detected in any of the samples analyzed. As cooking temperature increased, the total amount of HAAs increased and ranged between 0.05 and 0.51 ng/g. While the use of dry breadcrumb in meatball production increased the IQ content of the meatballs, the use of 5 percent dry breadcrumb caused a decrease in the total HAA content (28.57-66.67 percent) for all cooking temperatures.\nOriginality/value: Many research studies, including our study, in the literature were conducted on the formation and reduction of HAAs in meat and meat products. However, although dry breadcrumb is frequently used both in domestic and commercial meatball formulations, to the best of our knowledge, the effect of using dry breadcrumb in meatball production on HAAs formation has not been investigated in the literature. Therefore, in the present study, the effects of the use of dry breadcrumb in meatball production on some quality parameters and the formation of HAAs in meatballs cooked at 150 degC, 200 degC and 250 degC were determined.\n", "role": "assistant" } ]
Academic optimism and community engagement in urban schools
[ "United States of America", "Elementary schools", "Urban education", "Academic optimism", "Community engagement", "Critical success factors", "Students", "Student achievement" ]
[ { "content": "Summarize the following paper into structured abstract.\n\n__NO_TITLE__: In 1966, Coleman et al. reported that schools only accounted for 5 to 38 percent of the variance in achievement among students of different ages, races, and residence. The level of success that children experience falls \"along race and social class lines\" (Skrla et al., 2001, p. 239). Student achievement has been linked to family background differences, school differences, and the racial makeup of a school's student body. Skrla and colleagues asserted that this notion is the \"central problem of education\" (p. 239).\nTheoretical perspectives: Seligman (2006) posited three aspects of success: talent, desire, and optimism. He proposed that optimism matters as much as, if not more than, talent and desire. Anthropologist Lionel Tiger (1979) defined individual optimism as: [...] a mood or attitude associated with an expectation about the social or material future - one that the evaluator regards as socially desirable, to his [or her] advantage, or for his [or her] pleasure (as cited in Peterson (2000)). Seligman (2006) explained that optimists have a way of explaining bad outcomes; these explanations allow the optimist the energy to try again because to them, the failure is not permanent and it is controllable. As Hoy et al. (2006, p. 143) concluded: People who believe that bad outcomes are controllable have a greater sense of agency. Positive expectations and agency come together in a sense of hope that pathways can be identified to reach desired goals. Peterson (2000, p. 45) proposed that optimism is not merely a cognitive function, but that it also has an \"emotional flavor\", invoking the affective realm. As such, academic optimism has been conceptualized with cognitive, affective, and behavioral aspects. 
Collective efficacy, trust in students and parents, and academic press are representative dimensions of the cognitive, affective and behavioral aspects of academic optimism (Hoy et al., 2006, 2007; McGuigan and Hoy, 2006; Smith and Hoy, 2007; Wagner, 2008). \"Collective efficacy reflects the thoughts and beliefs of the group; faculty trust adds an affective dimension, and academic [press] captures the behavioral enactment of efficacy and trust\" (Hoy et al., 2006, p. 143). Academic optimism, through its \"rich picture of human agency\" (Hoy et al., 2006, p. 143), gives clarity to collective behavior in the cognitive, affective, as well as behavioral elements, as they work together to create an academic environment that is positive.\nConceptual framework: Positive psychologist Seligman (2006) argued that while talent and motivation are important, optimism is crucial and can more than compensate for lack of talent or motivation. Those who are optimistic tend to not feel helpless when bad events happen. Rather, it is the optimists who become \"energized when [they] encounter the everyday setbacks as well as momentous defeats\" (Seligman, 2006, p. 16). Pessimists, on the other hand, feel helpless when faced with setbacks and are less resilient than optimists. Importantly, Seligman stressed that helplessness is not innate and can be unlearned and replaced by learning to explain events in a more optimistic way. Seligman (2006, p. 290) called this \"exploiting the strengths of the maximal self [...] [to] give us a permanent skill for warding off depression [and to] help us achieve more and have better health\".In schools, the findings from emerging research are clear: academic optimism and high-community engagement are common characteristics of effective schools (Hoy et al., 2002, 2007; Kirby and DiPaola, 2009; McGuigan, 2005; Wagner, 2008). Hoy et al. 
(2006) coined the term \"academic optimism\" to show teachers' beliefs about collective behaviors and dispositions of the collective in their schools. Optimism can be learned, developed, and nurtured among stakeholders in a school. Since only one prior study has focused on urban elementary schools (Texas) (Smith and Hoy, 2007), this study sought to test those findings by examining academic optimism and community engagement across all schools in one urban district, controlling for SES. It is hoped that these two school characteristics can shed light on how leaders can build an empowered faculty who believe that their students' success lies far beyond racial and social factors.Academic optimism\nSES and student achievement: The findings of several studies have suggested that SES has an impact on student achievement (Coleman et al., 1966; Hoy et al., 2006, 2007; Hoy and Hannum, 1997). Recall that Coleman et al. (1966) found that family background was the single most important variable that predicted student achievement in school. However, many researchers, distressed by those results, have found several characteristics of effective schools that matter more than student SES matters (Edmonds, 1979; Hoy et al., 1998, 2002, 2006, 2007). While student SES continues to influence student achievement dramatically, initial research indicates academic optimism may well emerge as one more of those effective school characteristics (Figure 1).\nHypotheses: H1. Academic optimism of urban teachers is positively related to their students' achievement.H2. Community engagement of an urban elementary school is positively related to its student achievement.H3. The three factors of academic optimism (collective efficacy, trust in clients, and academic press) are positively related to community engagement of urban elementary schools.H4. 
The three factors of academic optimism (collective efficacy, trust in clients, and academic press) and community engagement predict student achievement in urban elementary schools.\nData collection and sample procedures: In all, 1,292 usable surveys were collected from full-time teachers and professional instructional faculty from 35 public elementary schools in an urban center in southeastern Virginia. The data collected for this study were part of a larger study initiated by the school district in an attempt to understand teacher and student perceptions of instructional policies and practices, as well as other issues that influence school culture and climate. Among students in the schools, 62 percent of students are black, 23 percent are white, and nearly 5 percent are Hispanic. Since the school is the unit of analysis for this study, all data were aggregated to that level. Table I provides a more detailed description of the sample and a comparison to Virginia's elementary schools.\nInstrumentation: Teacher Climate Survey 2008-2009: The school district's Teacher Climate Survey was used to collect data for this study; this survey was designed by the lead researcher and the school district using existing measures. The survey examined several teacher climate variables: teacher self-efficacy, teacher collective efficacy, teacher trust in the administration, teacher trust in colleagues, and teacher trust in clients. The survey also measured school climate variables: collegial leadership, teacher professionalism, academic press, community engagement, and organizational citizenship behaviors. The variables used in this study were teacher collective efficacy, faculty trust in clients, academic press, and community engagement. 
In this study of 35 schools, the 27 items employed in this research demonstrated a robust Cronbach's α reliability of 0.98.\nCollective efficacy\nResults analyses and discussion: Descriptive statistics, found in Table II, were computed for each of the three factors of academic optimism (collective teacher efficacy, faculty trust in clients, academic press), community engagement, and student achievement in grades three, four, and five in reading and math. This study also controlled for student SES in an effort to determine the most accurate relationships and effects of academic optimism and community engagement. Data were aggregated to the school level. The mean score for academic optimism was determined by averaging the scores for each of the three factors within the construct. The school district's evaluation, assessment, and support department provided the mean scaled scores for grades three, four, and five reading and math Virginia SOL scores; SOL scores range from 200 to 600. A score of 400 is considered passing at a minimally proficient level; 500 is deemed passing with advanced proficiency.\nCollective teacher belief scale as the measure for collective efficacy\nImplications to research and practice: Schools are under much pressure to close the achievement gap between students of different races, those who are rich or poor, and those who live in suburban or urban areas. NCLB (2001) was the federal response calling for more accountability. As schools strive to meet their federal benchmark goals, researchers and practitioners have looked to organizational properties that schools can nurture and build in order to increase student achievement and promote their learning (Bryk and Schneider, 2002). Teacher beliefs can either help or hinder the learning process of students (Agne et al., 1994). 
The findings from this study suggest that practitioners may need to examine their beliefs and attitudes, which affect their motivation in teaching students from low socioeconomic homes. This has implications for researchers and practitioners as they continue to look for ways to improve student achievement. Specifically, this study looked at school processes in the social context that have recently emerged from research as showing significant, positive relationships with student achievement. Academic optimism and community engagement were found to work in ways that improve student achievement. Understanding the social contexts in classrooms and schools allows education leaders to work with faculty in examining current practice in an effort to improve the educational outcomes for all students, even those who must overcome the obstacles to learning posed by their low SES.\nAcademic optimism\nRecommendations for further research: Further research on the relationships among academic optimism, community engagement, and student achievement, along with an investigation as to the relative effects of academic optimism and community engagement on student achievement should be done in order to account for the differences in state standards and assessments, as well as grade levels. These studies may provide additional information and insight into how these constructs operate in different learning contexts. Finally, the findings of this study suggest that neither academic optimism nor community engagement independently predicts student achievement, when controlling for student SES. 
While collectively, student SES and academic optimism did predict student achievement in this study, a prior study in Texas (Smith and Hoy, 2007) in schools of slightly larger size with similar demographics found that academic optimism was as important as student SES (b=0.34, p<0.01; b=-0.34, p<0.01) when predicting student achievement in fourth grade math as measured by the Texas Assessment of Knowledge and Skills Test (TAKS). It would be interesting to explore the relative effects of student SES and academic optimism in addition to examining state standards and assessment alignment for an explanation of the differences.\nFinal thoughts: NCLB (2001) has forced school districts to look at how they are meeting the needs of all students, particularly those from subgroups of students previously marginalized. Each of the aspects of academic optimism and community engagement has been linked to high student achievement. Educators must reject the notion that they have little influence over their students' achievement. School leaders must foster an environment of high expectations, where parent outreach actively occurs, and teachers believe they can do their jobs well and that they are supported by a professional collaborative community (Bryk and Schneider, 2002). This kind of environment promotes student performance and achievement, where students are engaged in the learning process and are able to demonstrate their knowledge and understanding, as well as achieve at high levels. Academic optimism challenges researchers and practitioners alike to examine the potential power the collective has over student achievement. Investigating teacher attitudes and beliefs about students from low socioeconomic homes may provide insight into teacher motivation and student achievement. There is much that teachers and schools can do to increase student achievement of all students, even those who must overcome the obstacles to learning posed by low SES. 
Schools seeking to build bridges to connect with families and community may have an edge over schools that seek to buffer the school from these influences. Education leaders who view the community as a resource rather than a threat run schools where students tend to achieve at higher levels (DiPaola and Tschannen-Moran, 2005). Education leaders must be mindful and act in ways that empower teachers through building collective efficacy, working to build trust between teachers, parents, and students, all the while emphasizing high academic achievement.\n", "role": "user" }, { "content": "Purpose: - The purpose of this paper is to examine the relationships among academic optimism, community engagement, and student achievement in urban elementary schools across one district.\nDesign/methodology/approach: - Data were collected from all 35 urban elementary schools across one district in Virginia, USA. Correlation, multiple regression, and factor analyses were used to test the hypotheses.\nFindings: - In schools where the faculty are optimistic that their students can succeed despite the obstacle of low socioeconomic status (SES) and where the community is engaged, students are more likely to achieve at higher levels. Findings of this study also supported that community engagement, collective efficacy, trust in clients, and academic press do act as predictors to collectively influence student achievement.\nResearch limitations/implications: - The Goddard measure for collective efficacy was replaced with one developed for more challenging settings such as urban schools.\nPractical implications: - Academic optimism and community engagement were found to work in ways that improve student achievement. 
Understanding the social contexts in classrooms and schools allows education leaders to work with faculty in examining current practice, in an effort to improve the educational outcomes for all students, even those who must overcome the obstacles to learning posed by their low SES.\nOriginality/value: - With only one previous study of this construct in an urban elementary setting, the current study sought to test those findings in an effort to continue pushing this research agenda into urban settings.\n", "role": "assistant" } ]
Designing optimal double sampling plan using the weighted Poisson distribution
[ "Acceptable quality level", "Double sampling attributes plan", "Limiting quality level", "Minimum risk plan", "Weighted Poisson distribution" ]
[ { "content": "Summarize the following paper into structured abstract.\n\n1. Introduction: Associated with each sampling plan there is an operating characteristic (OC) curve which describes clearly the performance of the sampling plan against good and poor quality. Sampling plans are usually selected such that two points on the OC curve are specified, namely (p\n2. WPD: Rao (1965) introduced the concept of weighted distribution when the samples are recorded without a sampling frame that enables random samples to be drawn. The weight function that usually appears in the scientific and statistical literature is ω(X)=X, which provides the size-biased version of the random variable. The size-biased version of order k, which corresponds to the weight ω(X)=X\n3. Selection of double sampling attributes plan using the WPD: The double sampling plan has the following parameters:\n4. Comparison with Govindaraju and Subramani's procedure of selecting double sampling attributes plan: Duncan (1986) has presented tables for the selection of double sampling plans for given p\n5. Construction of table: Table IV is constructed assuming the weighted Poisson model for the OC curve. While the weighted Poisson model is exact for the case of non-conformities, it is a good approximation to the binomial and Poisson models when n is large and p is small such that np\n6. Conclusion: The main objective of this paper is the consideration of overall risk for a wrong decision as the optimization criterion. The plans tabulated here refer to the operating ratios which are often encountered in practice. The sum of risks may be minimized rather than fixing them at given levels, when the \"producer\" and the \"consumer\" belong to the same company or interest. Table IV presented in this paper has been developed using the WPD for selecting the double sampling plan. 
Based on the analysis carried out, we can arrive at a conclusion that the WPD further reduces the sum of risks when compared to the Poisson distribution (see Table V). The OC curve of the double sampling plan using the weighted Poisson model has a better \"shoulder\" in comparison with the Poisson model (see Figure 1).\n", "role": "user" }, { "content": "Purpose: The purpose of this paper is to present the optimal double sampling attribute plan using the weighted Poisson distribution.\nDesign/methodology/approach: For the given AQL and LQL, the sum of the producer's and consumer's risks has been obtained. Based on the weighted Poisson distribution, the sum of these risks has been optimized.\nFindings: In the final inspection, the producer and the consumer represent the same party. So, the sum of these two risks should be minimized. In this paper, the sum of risks has been tabulated using the weighted Poisson distribution for different operating ratios. These tabulated values are comparatively smaller than the sum of risks derived using the Poisson distribution.\nOriginality/value: The sampling plan presented in this paper is particularly useful for testing the quality of finished products in shop floor situations.\n", "role": "assistant" } ]
Needs and technology adoption: observation from BIM experience
[ "Information technology strategy", "Innovation", "Decision making", "Information management", "Motivation", "Change management" ]
[ { "content": "Summarize the following paper into structured abstract.\n\n1. Introduction: The architecture, engineering and construction (AEC) sector struggles with significant gaps between the available technology and the technology used in practice (Peansupap and Walker, 2005; Moore and Dainty, 1999; Taylor and Levitt, 2004). Accordingly, the diffusion of ICT and Building Information Modeling (BIM) applications has been the focus of recent research in construction (e.g. Samuelson and Bjork, 2013; Bernstein, 2012; Khosrowshahi and Arayici, 2012; Azhar, 2011; Arayici et al., 2011; Gu and London, 2010; Bew and Underwood, 2010; Succar, 2009; CRC, 2009).\n2. Background: This research builds on the apparent congruence between two seminal works, namely, Rogers' theory on technology adoption and innovation diffusion, and Maslow's theory on human needs. This investigation is also significant because innovation and needs are believed to be mutually related such that either innovation is a response to an identified need or innovation leads to the explication of a latent need (O'Sullivan and Dooley, 2008; Heskett, 2002; Sheth, 1981). Therefore, this section looks at related literature on technology, innovation diffusion and needs.\n3. Research methodology and data collection: This research was conducted in multiple phases between 2007 and 2014, such that the later phases were planned as a follow-up of the findings from the earlier phases. In the first phase (2007-2008), data were collected through focus group interviews (FGIs) and one-on-one interviews with experts. Two FGIs were conducted in Australia with representatives from leading AEC stakeholders from different backgrounds including architects, academic researchers, contractors, clients, design managers, consultants, facility managers, government agencies and software vendors. The participants were either early adopters of BIM in the Australian AEC industry or were considering BIM adoption. 
Each FGI included between 12 and 15 participants and the duration for each FGI was 4-5 hours. A breakdown of the key sessions in the FGIs is presented in Appendix 1. The FGI discussions revolved around the technical requirements for BIM-based collaboration platforms and potential challenges to adopting integrated design practice using such a collaboration platform. The FGI discussions were recorded on tape for transcription and analysis. The FGI discussions were followed by one-on-one interviews and field observations with some of the FGI participants who agreed to show their ongoing BIM adoption and implementation activities in their organization. Based on the data collected in the first phase, an open-ended analysis of the data was conducted to identify the main themes and discussion patterns.\n4. Results and observations: In the empirical study, there was a general consensus across the participants that despite the perceived utility and acknowledged need for BIM and related systemic innovations, the priority for most stakeholders to focus on ongoing projects that bring revenue and require risk mitigation inhibits adoption across the industry. The long-term benefits of adoption are often overlooked in favour of short-term goals that are specific to the ongoing projects. A simpler explanation based on the responses is that for early adopters the need for the technology and innovation was high among their priorities, while for the others the need for the innovation was dominated by their other needs. That is, a hierarchical ordering of innovation-related needs is observed.\n5. Discussion: The main contribution of this paper is establishing the congruence between Maslow's motivational theory of needs and Rogers' theory on technology adoption and innovation diffusion, with a preliminary validation based on empirical studies on BIM adoption in the AEC sector. 
This novel approach of studying technology adoption and innovation diffusion from the hierarchy of needs perspective shows three distinct aspects of adoption and innovation-related needs.\n6. Conclusion: The main contribution of this research has been establishing the congruence between Maslow's motivational theory of needs and Rogers' theory on technology adoption and innovation diffusion, supported by a preliminary empirical validation of this congruence in the context of BIM adoption in the AEC sector. This paper shows that technology adoption and the diffusion of systemic innovations in organizational networks can be explained through the hierarchy of innovation-related needs of the different actors. It is shown that actors, both at individual and organizational level, have primary and secondary needs associated with innovation and technology adoption. The primary and secondary needs were found to apply across each of the three types of innovation-related needs that were identified, namely, \"need to innovate\", \"need for the innovation\" and \"need for the diffusion of the innovation\". Further, based on abduction, it is hypothesized that innovation and adoption-related needs of actors can be explained in terms of stable state and excited state needs. Actors that are averse to innovation or innovation adoption in stable state can be driven by external perturbation to excited states of innovation or innovation adoption. Similarly, it is proposed that environmental perturbation could inhibit innovation or innovation adoption behaviour for some actors in excited state, who are otherwise motivated to innovate or adopt innovation in their stable states.\n", "role": "user" }, { "content": "Purpose: Despite the recognized role of motivation of actors in technology adoption decisions, there is limited understanding of the psychological processes underlying the motivation. 
The purpose of this paper is to explore this gap by investigating Building Information Modeling (BIM) adoption from the viewpoint of Maslow's motivational theory on hierarchy of needs.\nDesign/methodology/approach: This research uses mixed methods. Initially theoretical arguments establish the suitability of Maslow's hierarchy of needs as the conceptual framework to investigate technology adoption. The hypotheses and research questions are investigated using data collected through focus group interviews, interviews and field observations in Australian architecture engineering and construction (AEC). The findings are validated with a survey of BIM adoption cases reported in literature, and additional interviews conducted in Finnish AEC sector. Finally, abductive reasoning is applied to seek the best possible explanation for the observed patterns.\nFindings: It is found that besides individuals, organizations also demonstrate hierarchical ordering of innovation-related needs. Three broad categories of innovation-related needs are identified. Using abduction, the innovation-related needs of actors are described in terms of stable and excited states.\nResearch limitations/implications: The findings are primarily based on studies conducted in regions with developed economies.\nPractical implications: This research shows that Maslow's hierarchy of needs could be a useful diagnostic framework to assess actors' response towards technology adoption.\nOriginality/value: This investigation into the potential usefulness of Maslow's theory into understanding technology adoption is by itself a novel research contribution. The finding that hierarchical view of needs can partly explain the adoption decisions of both individual and organizational actors is an original contribution.\n", "role": "assistant" } ]
Studies show link between gender stereotyping and harsher punishments for workers
[ "Gender", "Dismissal", "Punishment", "Stereotypes" ]
[ { "content": "Summarize the following paper into structured abstract.\n\n__NO_TITLE__: Researchers have produced two studies showing the strong impact of gender stereotypes on punishment recommendations for badly behaved workers in the USA. The first study revealed that both men and women who engaged in gender-stereotypical counterproductive work behaviour (CWB) were more likely to lose their jobs than women and men whose bad behaviour did not conform to gender stereotypes. The second study's results were more complex and harder to explain as they suggested that women tended to receive harsher punishments for bad behaviour that conformed to female stereotypes, but the same finding was not apparent for men and male stereotypes, unlike in the first study.\n", "role": "user" }, { "content": "Purpose: The researchers set up two studies to test their theory that gender-stereotypical counterproductive work behaviors (CWBs) were more likely to be harshly punished than non-gender stereotypical ones.\nDesign/methodology/approach: Two rival theories were used. Expectancy-confirmation theory suggests people process behavior more easily when it fits in with their expectations. But expectancy-violation theory argues the polar opposite. It states that perceivers react more negatively when bad behavior is inconsistent with their expectations. They carried out two studies. The first one analyzed the appeals of 197 federal employees against their CWBs. The second one was a simulation using MBA students to test the findings of study one.\nFindings: Study one confirmed the concept of expectancy-confirmation theory for both men and women. Gender-stereotypical CWBs led to more dismissals. The second study supported the first study's result, but only for women. The opposite result prevailed for men.\nOriginality/value: The studies are important because they show that gender stereotypes can influence the perceptions of individuals in charge of punishment. 
This is something that organizations need to be aware of as it is important for them to be consistent. Punishment decisions, the authors suggest, \"have important consequences for not only the target of punishment, but also can have a powerful social effect upon the broader organization.\"\n", "role": "assistant" } ]
Value cocreation at its peak: the asymmetric relationship between coproduction and loyalty
[ "Customer loyalty", "Value cocreation", "Coproduction paradox", "Customer coproduction", "Negative effects", "Process enjoyment", "Customer self-efficacy", "Monetary expenditures" ]
[ { "content": "Summarize the following paper into structured abstract.\n\n__NO_TITLE__: Customers' service experiences that foster pleasurable social interactions and are able to create engaged customers are central to firms' competitive advantage (Karpen et al., 2015; Sorescu et al., 2011) and have been identified as a tier-one research priority for 2014-2016 of the Marketing Science Institute (Marketing Science Institute, 2014). Service research has intensively discussed the customers' role in creating value (e.g. Gronroos and Voima, 2013; Vargo and Lusch, 2004; 2008). Likewise, service management increasingly seeks to understand how value is created through markets to enhance value creation (Archpru et al., 2015). Service practice increasingly involves interactive processes between customers and service-providers such as customer coproduction of products and services from which customers derive a number of substantial benefits, such as increased preference fit and pleasurable experiences (Jaakkola et al., 2015; Merlo et al., 2014; Verleye, 2015), but which also force customers to invest more resources, such as time, skills, and ideas, as compared to a standardized alternative (Heidenreich et al., 2014). The furniture giant IKEA was a pioneer in coproduction activities by outsourcing the assembling of furniture from factory halls to customers' homes. The fast casual dining industry, as another example, makes customers become part of the production process by offering customized food, allowing customers to watch their food being prepared, etc. It is currently the fastest growing restaurant concept in this industry (Investopedia.com). Market leader Chipotle Mexican Grill, for instance, yields annual revenue growth rates of 20 percent, with an annual guest count rising by 5 percent each year (Forbes, 2014). In the restaurant chain Vapiano, customers interact with the cook who prepares pizza, pasta, and salad according to customers' specifications. 
Customers can further watch the actual manufacture of the pizza dough and pasta and get a glimpse of the herbs the restaurant grows in the dining area while waiting for their table. Customers can also pick herbs from boxes placed on the tables.\nCoproduction and negative effects of coproduction: Companies benefit from coproduction activities in terms of an efficient adaptation to customer needs, lower production costs, and stronger customer-company relationships, such as customer loyalty (Auh et al., 2007; Eisingerich et al., 2014; Heidenreich et al., 2014). Coproduction activities are thus increasingly offered in multitudinous industries, including financial (Auh et al., 2007), mobile (Kristensson et al., 2008), travel (Grissemann and Stokburger-Sauer, 2012), or health care services (Buttgen et al., 2012). Coproduction activities are also beneficial to customers; they offer functional benefits for customers in terms of increased preference fit, flexibility, and greater control over service delivery, and hedonic benefits in terms of fun, enjoyment, and spontaneous delight (Esmark et al., 2015; Miceli et al., 2013; Verleye, 2015).\nModerating effects: We investigate PE and SE in coproducing as moderators to capture both a process-related factor and a consumer characteristic that likely alter the relationship between coproduction and customer loyalty. Both variables are specifically relevant to customer participation behavior (Dabholkar and Bagozzi, 2002; van Beuningen et al., 2009; Xie et al., 2007-2008; Yim et al., 2012). 
PE pertains to the customer's enjoyment experienced during the coproduction process (Franke and Schreier, 2010); SE is the customer's assessment of his or her own capabilities to specify and produce a service, as required by coproduction processes (van Beuningen et al., 2009).\nEmpirical studies: To test our hypotheses, we applied a mixed-methods approach, in which we conducted one experimental study (Study 1) and two cross-sectional field studies (Studies 2 and 3). Central to all studies is the extent to which a customer engages in coproduction in the form of resource integration, such as investing time, providing ideas, and expressing needs and wants to create the ideal service experience. In Study 1, we tested for nonlinear effects between coproduction and customer loyalty (H1) as well as assessed the potential moderating effects of PE (H2) and SE (H3) in a controlled experimental setting, for which we chose a restaurant context. The aim of Study 2 was to replicate the findings of Study 1 in an alternative setting, namely, in a hotel. Finally, in Study 3, we extended the previous studies in a travel agency setting. In addition to investigating the nonlinear and moderating effects, we tested for a mediating effect of customer loyalty on the relationship between coproduction and customers' actual monetary expenditures. Table I provides an overview of the empirical studies.\nStudy 1: Restaurant experiment: In Study 1, we examined the proposed relationships of coproduction and customer loyalty (H1-H3) in a controlled setting by manipulating coproduction as an independent variable.\nStudy 2: Hotel study: Study setting and sample\nStudy 3: travel agency study: Study setting, sample, and measures\nGeneral discussion: The aim of this research was to assess the \"coproduction paradox,\" that is, the potential curvilinear effects of coproduction on customer loyalty. In doing so, we identify the optimum level of coproduction where value cocreation is the highest. 
We make three key contributions. First, we demonstrate, empirically and for the first time, the decreasing incremental effects in a coproduction setting. In three studies, across three different industries, we confirm a negative quadratic effect of coproduction on customer loyalty. Up to an optimum coproduction level, a customer exhibits positive loyalty toward the company. Beyond that level, customer loyalty decreases. Thus, coproduction is not always a guarantor of improved company performance, and at a certain level, coproduction likely reaches its limits of effectiveness. After this point, negative effects, in the form of decreased customer loyalty, likely emerge. Thus, an \"optimum\" level exists where both customers and companies benefit. Exceeding as well as underperforming coproduction activities do not pay off, since they decrease value cocreation.\n", "role": "user" }, { "content": "Purpose: Coproduction, as one component of cocreation of value, offers many benefits to customers and management, but also requires customers to invest a considerable amount of effort and time. The purpose of this paper is to investigate the coproduction paradox of benefits and costs.\nDesign/methodology/approach: One experimental study and two cross-sectional field studies across three service industries test the nonlinear relationship between level of coproduction and customer loyalty.\nFindings: Results show not only the optimum level but also the negative effects of increasing levels of coproduction on customer loyalty and, in turn, monetary expenditures. The negative effect can be partially offset by perceived process enjoyment (PE), such that consumers who enjoy the process exhibit increased loyalty after the optimum coproduction point. 
Customer self-efficacy (SE), however, further strengthens the inverted U-shaped relationship.\nResearch limitations/implications: Further research should try to replicate the findings in more complex and less hedonic service settings (e.g. financial investments) because both PE and SE might be even more powerful here.\nPractical implications: Service managers need to determine the optimal degree to which customers want to engage in the creation of services and avoid overburdening them. Management should further explore opportunities to elicit feelings of fun and enjoyment through coproduction.\nOriginality/value: Research usually highlights the potential benefits of coproduction for customers and companies and suggests a positive linear relationship between coproduction and success outcomes. This article instead shows that after an optimum level, the marginal benefits of coproduction for customer loyalty turn negative.\n", "role": "assistant" } ]
Logical brand management in a dynamic context of growth and innovation
[ "Innovation", "Business development", "Risk management", "Brand management" ]
[ { "content": "Summarize the following paper into structured abstract.\n\n__NO_TITLE__: An executive summary for managers and executive readers can be found at the end of this article.\nInnovation/growth detection tools (from the firms' perspective): For many years tools have been developed which allow one to detect and define growth opportunities. They often start from the firm's perspective. One of the most popular is Ansoff's growth matrix, which defines four categories of growth by evaluating different product/market combinations (Ansoff, 1965):* market penetration (growing within the current product/market combination);* market development (entering new markets with existing products);* product development (developing new products within existing markets);* diversification (developing new products in new markets). Abell offers another interesting growth framework, focusing on the business scope, which is assessed at three dimensions: the customer needs, the target groups and the technologies used (Abell, 1980). It may be perceived in some sense as an extension of the product/market approach of Ansoff. In Abell's approach customer characteristics of target groups (who?) and customer needs (what?) refer to the market served and technologies refer to the product component (how?). Different methods have been discussed in the literature to measure the growth potential of different product/market combinations (PMCs). In the GE/McKinsey matrix for instance (Mandour et al., 2005, pp. 149-153), these PMCs are assessed at two dimensions: the attractiveness and the competitive strength. The first dimension covers external factors such as current and future market potential, number of competitors, etc. The second dimension covers internal factors such as core competences, intrafirm synergies, etc. More recently, the GE/McKinsey approach has been refined to three dimensions (see Real-Win-Worth analysis in Creveling et al. 
(2006)):* the real dimension (customer perspective: is there a potential for this new idea?);* the win dimension (process perspective: can we do this?);* the worth dimension (financial perspective: is it worth doing this?). Chris Zook (2004) argues that innovation should be applied in a context of adjacency moves, investing in current businesses, while simultaneously meeting new opportunities (see also Treacy and Sims, 2004). The idea of investing both in current and future businesses is also found in \"the three horizons\" approach of McKinsey (Baghai et al., 1999) and in the visionary marketing (roadmap) approach of Moenaert and Robben (2006). Potential adjacency moves may not only be assessed in terms of market or product extensions, but also in terms of other processes. 1. New markets (as in the Ansoff approach):* new geographies; and* new customer segments. 2. New processes (covering more options compared to the Ansoff approach):* new products or services;* new distribution channel; and* new value chain links. Kotler and Trias de Bes (2003) make a similar distinction. However, they add extra components in their market definition focusing on customer needs, motivations and contexts. 1. New markets:* new target groups (who?);* new customer needs/motivations (what and why?); and* new customer contexts (when, where, with whom?). 2. New processes:* products and services (what?); and* the other marketing mix instruments (how?). Looking for adjacency moves, Kotler and Trias de Bes argue that a lateral move can be made in one dimension (for instance reaching a new market), while subsequently trying to close the gap that may be created in the other dimension (for instance the underlying processes necessary to reach that new market). Often a chain reaction may be induced by making these lateral moves. For instance, Heinz ketchup can make a natural move to mustard (taking into account the spontaneous \"functional\" associations as one part of its current brand equity). 
As mustard is often used in sandwiches, the market of sandwiches may become an interesting market as such (Bass, 2005).\nInnovation starting from the customer value (customer perspective): The previous approaches focused on processes and market definitions defined by the firm. Although the customer perspective is integrated in some of these approaches, for instance by assessing customer needs and contexts (as in the Kotler and Trias de Bes approach), it still does not start explicitly from the \"value\" of current offerings to the customer. Knowing customer needs and contexts is not the same as knowing how these needs and contexts are met by current offerings according to the customers' perceptions. Therefore another approach for detecting innovation or growth opportunities is to analyze and adjust the customer value. This approach may lead to a continuous redefinition of customer segments. This is referred to as value innovation (Kim and Mauborgne, 2004a). Table I summarizes the insights discussed so far. As can be observed in Table I, growth and innovation can be achieved by making a lateral move at one of the following three levels: 1. the customer value level; 2. the process level; and 3. the customer segment level. Govindarajan and Gupta (2001) refer to similar innovation roadmaps: redefinition of the customer base (the customer segment in this article), reinvention of the customer value and redesign of the value chain (the process level in this article). Of course, when one lateral move is made at one level, all other levels should be aligned. Subsequently, we will focus on the option of detecting innovation and growth opportunities starting from customer value. To guide the discussion we will use a logical brand management model that was discussed in Logman (2004). 
It incorporates the three growth perspectives mentioned above and integrates principles of the balanced scorecard (Kaplan and Norton, 1992), BCG's value creation model (Bixner et al., 2000) and the \"servqual\" gap analysis model (Zeithaml et al., 1990). We also explore the logical consistency rules proposed in that model in a dynamic growth context.Customer value perspective\nCases illustrating the risk and opportunities of value innovation: Subsequently, the Dell and Red Bull cases will be used to show some of the risks and opportunities in value innovation that were discussed in the previous sections.For instance, in the Dell case some of the characteristics of its successful \"direct marketing model\" may be difficult to sustain (see Figure 2).In order to keep an acceptable price for performance position, Dell should focus on quality (including performance and reliability), and complementary products and services (including sales and support), to improve the overall customer experience.From a customer value perspective it is important that Dell evolves from a pure product (commodity) focus to a solution focus, in which customized bundled solutions are offered. The key question is whether this shift towards integration and bundling of products will allow Dell to keep its current competitive advantages such as its current efficiency. As indicated in a META group (2004) research paper, standardization will be essential for the future of most IT players. This is also the key issue in Dell's innovation approach (see Pike and Abels, 2005). The META group study emphasizes that commoditization starts with standardization. Economies of scope and scale create a virtuous cycle: lower price drives broader scope of demand, broader demand drives economies of scale, and economies of scale drive lower prices.In a second stage, increased levels of standardization lead to increased levels of specialization. 
For instance, processors are being sold at different price, performance, power, and heat design levels. However, in offering customized solutions, Dell should try to rely on standards-based and interoperable hardware and software components as much as possible.It is also obvious that in this \"build to integrate\" process Dell highly depends on its partnerships. Dell uses the insights of strategic partners such as Intel, Microsoft, Oracle, and EMC in developing its own innovation roadmap. In doing this it tries to provide customers with the latest relevant technology at an acceptable price. As was mentioned in this article, external brand drivers (related to suppliers, complementary players, ...) should be aligned with internal brand drivers. However, some of the external brand drivers (such as the quality of product components supplied by external partners) may not be fully controlled by the brand itself. Dell recently experienced this with certain batteries used in Dell Latitude(tm), Inspiron(tm), XPS(tm) and Dell Precision Mobile Workstation(tm) notebook computers (with cells manufactured by Sony). It turned out that under rare conditions, it is possible for these batteries to overheat, which could pose a risk of fire. In cooperation with the US Consumer Product Safety Commission and other regulatory agencies, Dell announced that it is voluntarily recalling these batteries and offering free replacements. But the question remains what will be the long-term damage of such an incident.Besides the risks indicated, there may be interesting innovation and growth opportunities as Dell focuses its innovation and growth efforts on big customers (Boeing, Google, ...) and uses these customers as ambassadors to convince other customer segments of its innovation values. 
This may lead to positive inter-segmental effects (a topic that was extensively discussed in this article).The Red Bull case forms another interesting example, in which growth and innovation opportunities and risks may be shown, using principles of the conceptual model that was discussed.As an energy drink Red Bull may try to reach new customer segments by extending and refining the health dimensions of its current value concept: for instance suitable for diets, immunity problems, etc. (see Figure 3). However, in following this path, it should take care not to lose a great deal of its current image (in the current customers' minds). That is, the fact that it is being associated with: a prestigious brand, a brand with a certain history and tradition (myth), a modern brand, etc.Although Red Bull has always refused to associate itself officially with the way in which the brand appears to be most popularly consumed, as a mixer with vodka or other spirits, there is no doubt that its target group of users (18-15 year-olds) often seems to use it that way.This may negatively affect its image towards other new customer segments, such as long distance drivers, older people, ... 
(see also the principles of the brand hijack that were discussed in this article).If Red Bull plans additional efforts in that direction, it certainly should pay attention to that aspect of its brand equity.In the same context, attention should be paid to some of its product and communication related activities.In trying to attract new customer segments, its packaging and design may have to be adjusted (as is already being done by some of Red Bull's competitors): for instance a bigger format for the thirsty sportsman, a gel version for meal replacement seekers...As their sponsoring activities are mainly focused on extreme sports (for instance F1 racing), emphasizing \"the Red Bull gives you wings\" idea, it may become difficult for Red Bull to sponsor \"less extreme\" activities in which other health dimensions are emphasized.It is obvious that finding a good balance between current and new efforts in trying to make adjacency moves to new segments is the key issue for Red Bull.\nSome final conclusions and managerial implications: When using a value innovation framework as proposed in this article, a crucial question remains which assumptions will either make or break the business (see Govindarajan and Trimble, 2005, pp. 151, 172). When making adjacency moves to new customer segments/products, it may be interesting to look for similarity or analogy with the current segments/products. However, sometimes it may turn out that the analogy is partly or entirely based on superficial similarity (see also Gavetti and Rivkin, 2005).In pursuing innovation and growth, brand managers should always try to find the right balance between current and new marketing efforts, taking into account the consistency issues that were raised in this article. It should be noted that innovation starting from customer value (compared to other innovation approaches) allows the brand manager to perform a brand consistency audit at a more detailed level. 
This kind of audit may also be perceived as a risk management approach (as in Katsanis and Pitta, 2006) for evaluating innovation and growth options. Besides risks of value innovation, several opportunities were raised in this article. Focusing on new customer jobs in all stages of the buying and experience cycle and detecting benefit as well as cost drivers (both at a rational and emotional level) forms the key challenge for most brand managers.\n", "role": "user" }, { "content": "Purpose: - The aim of this paper is to propose a consistent framework that allows the brand manager to detect innovation/growth opportunities and risks.\nDesign/methodology/approach: - The paper is based on an extensive theoretical study of innovation and growth approaches combined with analysis of cases (Dell and Red Bull) and many practical examples. The different approaches are compared at three levels: the customer value, the process and customer segment level. It also uses principles of the logical brand management model, proposed in a JPBM (2004) article as a benchmark.\nFindings: - The paper shows that many frameworks still focus too much on the firm's perspective instead of the customer perspective. An approach is proposed in which growth and innovation starts from the customer value. It allows to detect potential opportunities and risks at a very detailed level.\nResearch limitations/implications: - The growth and innovation model proposed may be used in a business- to-consumer and a business- to-business context (as illustrated with the cases and examples).\nPractical implications: - Continuous adjustment of the customer value, by manipulating importance and satisfaction rates of different brand drivers leads to a continuous redefinition of contextual segments. In an effort to keep current customer segments and to attract new ones a balance must be sought continuously when adjusting the brand drivers. 
This kind of audit may also be perceived as a risk management approach for brand managers who want to evaluate innovation and growth options.\nOriginality/value: - This paper \"integrates\" many innovation and growth approaches into \"one\" consistent approach.\n", "role": "assistant" } ]
Modelling people's behaviour using discrete-event simulation: a review
[ "Performance measurement", "Discrete-event simulation", "Behavioural operations", "Human behaviour", "Simulation methodology" ]
[ { "content": "Summarize the following paper into structured abstract.\n\n1. Introduction: Interest in behavioural operations management (OM) has a long history with Cummings (1977) stating that both behavioural and system constructs are crucial in studying, designing and evaluating production systems. Bendoly and Schultz (2006) presented a special journal issue on the topic of incorporating behavioural theory in OM empirical models. Within that journal issue, an article by Bendoly et al. (2006) provides a review of papers published between 1985 and 2005 on behavioural research in OM. They stated that the success of the implementation of OM tools and techniques, and the accuracy of its theories, relies heavily on our understanding of human behaviour. A \"follow on\" special issue is presented in 2013 (Croson et al., 2013). Here, the importance of the identification of individual differences rather than the average response of people is emphasised. A further overview of behavioural OM studies is provided in Zhao et al. (2013). Neumann and Dul (2010) reported on an increasing interest in human issues in OM and find through a systematic review of literature that operating system performance co-varies with the effects of human behaviour in 95 per cent of studies that looked at both elements in human factor applications. OM areas such as lean management, with its related topics of mass customisation, flexibility and agility stress the very important role of the workforce, seen as a main asset and a source of competitive advantage (Liker and Convis, 2011). The modern workplace is characterised by frequent product and process change in addition to worker turnover and absenteeism. In this environment, workers must adapt to learn new tasks frequently, while at the same time responding to demand in the shortest possible time (Nembhard, 2014). 
Workers in these systems will have specific skills and a level of flexibility, which are highly individual qualities and the treatment of all employees as identical in terms of their manual skills, ability to learn and ability to cope with difficult situations is a major simplification (Malachowski and Korytkowski, 2016).\n2. Method: Current reviews of the use of simulation are usually based within an application domain such as manufacturing (Negahban and Smith, 2014) or focus on a specific application (Gul and Guneri, 2015). In order to provide an overview of the nature and scale of articles presenting the use of DES to model people's behaviour in OM, and due to the lack of any current reviews of this nature, a systematic literature search was performed using the Scopus, ScienceDirect, Google Scholar and Emerald Insight databases. To focus on contemporary implementations of DES it was decided a priori that the search would be restricted to studies from 2005 to 2017. The identified keyword terms simulat* in combination with either discrete-event or discrete event was searched to identify studies in the domain of DES. These keywords were combined with the keyword terms human*, people, person, individual* to identify DES studies in the domain of modelling people's behaviour. Papers were filtered for relevance to the field of OM and providing simulation model implementation details by title and then at abstract and article level in accordance with inclusion and exclusion criteria (see Table I).\n3. Findings: In order to assess how people's behaviour was modelled using DES within the articles identified by the literature review we need to consider the initial phases of the simulation study process. Once the modeller has gained an understanding of the problem situation they will need to determine the simulation study objectives. 
These objectives then lead to the design of a conceptual model which provides a specification for the data requirements of the different areas of the model such as customer arrivals and process times. In a traditional simulation study methodology, these data requirements do not consider the issue of modelling people within the model specifically. Here, we use these generic definitions of the areas of data requirements and relate them to the areas in which we can model people's behaviour. Thus, based on the data requirements of a simulation model defined in Greasley (2004), the aspect of people's behaviour we may wish to model can be classified as follows:\n4. Discussion: An important outcome of the review is the identification of a number of articles published presenting implementations of the modelling of people's behaviour. These articles cover modelling using DES in all the main areas of a model requiring data collection. The papers also cover many aspects of people's behaviour in operations including the modelling of stress, physical tiredness, competence, attendance, skill levels, capability and flexibility. In order to facilitate the incorporation of people's behaviour in OM models there now follows a discussion regarding the nature of the cost and workload required when modelling human behaviour. Key challenges will then be discussed in the areas of data collection needs and validation requirements for a DES when modelling human behaviour. There then follows a categorisation of the articles found and a methodology is presented to guide the use of DES to model people's behaviour in OM.\n5. 
Conclusions: Bendoly and Prietula (2008) discussed how modelling the behavioural dynamics observable among real humans in various tasks emblematic of OM practice can help refine, occasionally support, and often refute past modelling assumptions and hence present essential contributions that truly can help close gaps between operations research and effective management practice. This paper aims to advance the incorporation of behavioural responses in OM models by providing the following contributions. It brings together a record of the publications that demonstrate the use of DES to model people's behaviour in the OM domain. These publications show the relative scarcity of research in this area. This can be seen as due to the lack of model implementation detail provided in the literature and to the major challenges faced when modelling people's behaviour in the areas of data collection and model validation. The review also found a number of different strategies employed when modelling people's behaviour using simulation and the paper presents a new perspective in terms of a methodological approach to making a choice between these strategies in the context of the cost and effort required to implement them.\n", "role": "user" }, { "content": "Purpose: The purpose of this paper is to provide a contribution to the area of behavioural operations management (OM) by identifying key challenges in the use of discrete-event simulation (DES) to model people's behaviour in OM.\nDesign/methodology/approach: A systematic literature review method is undertaken in order to assess the nature and scale of all publications relevant to the topic of modelling people's behaviour with DES in OM within the period 2005-2017.\nFindings: The publications identified by the literature review reveal key challenges to be addressed when aiming to increase the use of DES to model people's behaviour. 
The review also finds a variety of strategies in use to model people's behaviour using DES in OM applications.\nResearch limitations/implications: A systematic literature review method is undertaken in order to include all publications relevant to the topic of modelling people's behaviour with DES in the OM domain but some articles may not have been captured.\nOriginality/value: The literature review provides a resource in terms of identifying exemplars of the variety of methods used to model people's behaviour using DES in OM. The study indicates key challenges for increasing the use of DES in this area and builds on current DES development methodologies by presenting a methodology for modelling people's behaviour in OM.\n", "role": "assistant" } ]
Factorial surveys reveal social desirability bias over self-reported organic fruit consumption
[ "Consumer behaviour", "Surveys", "Organic", "Social desirability", "Vignette", "Factorial surveys" ]
[ { "content": "Summarize the following paper into structured abstract.\n\n1. Introduction: 1.1 Social desirability and consumer self-reports in research about sustainable food\n2. Methods: This research aimed to ask young consumers about which factors could make them more prone to lie when surveyed about their preferences for organic fruit. Conventional questionnaires are inadequate for this task because counterfactual questions are cognitively demanding and because direct questions themselves might suffer from social desirability bias. Therefore, we decided to make respondents evaluate some scenarios describing a young consumer who was surveyed, in various field conditions, about his/her preferences for organic fruit. Then, we asked respondents whether they believed that the subject described in the scenario would answer the survey honestly or not. A typical scenario (or \"vignette\") appeared as follows.\n3. Results: The findings indicate that some intrinsic and situational factors characterizing surveys are perceived by respondents as critical for misreporting their preferences for organic fruit products (Table II).\n4. Discussion: The findings show that some major antecedents of socially desirable responding are perceived by young consumers as drivers of misreporting in surveys measuring their preferences for organic fruit. While these findings align with the existing psychological theory (Krumpal, 2013), they show that social desirability might affect self-reported estimates of consumers' preferences, even for the most underestimated foods, like an organic fruit. This conclusion extends the concerns about social desirability bias which stem from the viewpoint regarding the purchasing of sustainable goods as a tool for signaling a socio-economic status (Griskevicius et al., 2010). This study also introduces FSEs in research about sustainable food. 
Compared to other forms of conjoint surveys, FSEs do not have any underlying theory about preferences and have a higher degree of internal validity, statistical efficiency and robustness against social desirability (Chung and Monroe, 2003; Hainmueller et al., 2015; Venkatachalam, 2004; Wason et al., 2002). This flexibility makes them suitable to explore different theoretical frameworks, and to integrate them whenever this is possible. In our study, they enabled us to test for the effect of empirical expectations (Bicchieri, 2016; Bicchieri and Xiao, 2009) over consumer misreporting, extending conventional theories about socially desirable responding. We also believe that FSE might be important to study the interplay between rational and irrational decision making by consumers: while many green marketing studies emphasized the irrationality of consumers, consumers also make deliberate choices, guided by their values or other antecedents. Factorial surveys, through their complex vignette creation, might be good for this task and we encourage further research testing this potential.\n5. Conclusions: This research, conducted over a sample of under-30 university students, shows that social desirability is perceived as a factor that might lead consumers to misreport their preferences for organic fruit. Misreporting is deemed to depend on the individual need for social approval, and participants in our study believed that it was magnified by improper survey administration modes, where respondents do not have the opportunity to express their preferences anonymously. This situation probably originates from emerging social norms about sustainability, which govern sustainable lifestyles and consumption patterns in the younger generations.\n", "role": "user" }, { "content": "Purpose: Surveys measuring consumers' preferences for sustainable food might suffer from socially desirable responding. 
Social desirability stems in part from social norms about sustainable lifestyles, when respondents need approval from others and when privacy is not guaranteed during survey completion. While various studies showed this phenomenon through laboratory experiments and by comparing different modes of survey administration, no research adopted factorial survey experiments (FSEs) to measure which factors are perceived by consumers as critical for socially desirable answering. The purpose of this paper is to fill this gap, at least for young consumers in a case study with organic fruit.\nDesign/methodology/approach: In total, 143 under-30 respondents were involved in an FSE. Each respondent evaluated six hypothetical scenarios (n=858) describing a consumer surveyed about his/her preferences for organic fruit. Respondents indicated whether they believed participants would have answered honestly or not to the survey described in each scenario. Generalized linear mixed models were used to model how scenario attributes were perceived to influence honest answering.\nFindings: Respondents believe that people are more prone to bias their answers the more they seek approval from others. Moreover, the presence of acquaintances during survey completion is another critical driver of survey misreporting.\nOriginality/value: This study, by using a novel robust quasi-experimental approach, confirms that social desirability could lead consumers to misreport their preferences when surveyed about an organic fruit. This confirms that well-designed surveys, adopting proper remedies for social desirability should be adopted even for those food products, like fruit, which are usually deemed to be less subjected to misreporting. It also introduces FSEs as a flexible tool for collecting insights from consumers about potential antecedents of their behavior.\n", "role": "assistant" } ]
Managing hazards of the make-buy decision in the face of radical technological change
[ "Exchange hazards", "Hierarchical hazards", "Make or buy sourcing decision", "Prior exploratory technological experience", "Radical technological change", "Retention of outsourced component knowledge" ]
[ { "content": "Summarize the following paper into structured abstract.\n\nIntroduction: There has been growing interest in the relationship between radical technological change and the make-buy component sourcing decision and its impact on performance in the technology management literature (Grant, 1996; Kogut and Zander, 1992; Monteverde, 1995; Nickerson and Zenger, 2004; Powell et al., 1996; Spencer, 2003). To facilitate a better understanding of the subsequent theoretical arguments presented in this paper and the relevant literature, we provide the following illustration. Consider as the unit of analysis a product that comprises two components: C1 and C2. Two separate firms, A and B, produce this product. The product then faces a radical technological change in the market that significantly alters components C1 and C2 and the linkages between them. To address this new change, both firms A and B make C1 internally. However, firm B employs \"a make strategy\" (i.e. makes C2 internally), while firm A employs \"a buy strategy\" (i.e. buys C2 through an arm's-length market transaction).\nTheory and hypothesis development: Radical technological change--US mountain bicycle gear shifting market\nMethod: Sample and data collection\nResults: The summary statistics and correlation coefficients for the variables in this study are displayed in Table I.\nDiscussion and conclusion: Theoretical contribution\n", "role": "user" }, { "content": "Purpose: The purpose of this paper is to consider the management of hazards arising from the make-buy choice in the face of radical technological change. This sourcing choice can lead to distinctive exchange and hierarchical hazards. 
This study's main interest is in investigating the research question \"How can firms reduce those distinctive exchange and hierarchical hazards arising from the make-buy sourcing strategy when dealing with radical technological change?\"\nDesign/methodology/approach: The author develops hypotheses that the in-house retention of outsourced component knowledge will likely reduce exchange hazards arising from the buy strategy choice, and that prior exploratory technological experience will likely reduce hierarchical hazards arising from the make strategy choice. The author explores the US mountain bicycle industry from 1980 to 1992 to test the developed hypotheses. To account for endogeneity arising from the make-buy sourcing decision, the author uses Heckman's two-stage switching regression model.\nFindings: The major findings are that the in-house retention of outsourced component knowledge and prior exploratory technological experience are distinctive moderating factors improving performance of a buy strategy and a make strategy, respectively.\nOriginality/value: Since the extant literature tends to focus on which of the two sourcing strategies provides the greatest performance advantages in the face of radical technological change, there is a strong implication to suggest that if a firm performs poorly with one sourcing decision, the firm should switch to an alternative one. Different from the expositions of the literature, this study elevates the understanding regarding how firms can improve the performance of their current sourcing orientation rather than whether they should switch from one sourcing strategy to another.\n", "role": "assistant" } ]
The effect of corporate governance mechanisms on European Mergers and Acquisitions
[ "Europe", "Corporate governance", "Event study", "Acquirer abnormal returns", "Mergers and acquisitions" ]
[ { "content": "Summarize the following paper into structured abstract.\n\n1. Introduction: The main drivers that force firms to engage in Mergers and Acquisitions (M&As) are the creation of synergies, the assimilation of new competencies and the diversification of business risk, which in turn strengthen corporate growth and enhance corporate profitability (Alexandridis et al., 2010). This illustrates that M&As may act as an efficient way for firms to expand into new markets and increase the wealth of their shareholders (Papadakis, 2005). In that context, during the past decades academic and market research has extensively studied the wealth effects deriving from the activity of M&As (Alexandridis et al., 2010; Alexandridis et al., 2017; Masulis et al., 2007). One of the main findings of the literature is that M&As often fail to generate value for the shareholders of acquiring firms. Thus, one of the purposes of this study is to examine the economic impact of M&As on European acquirers.\n2. Related literature and challenges in corporate governance statements: Our study relates to at least two strands of literature (i.e. M&As and CG). The first is research on the economic impact of M&As on the shareholders of acquiring firms. The second is a literature focused explicitly on the effects of CG mechanisms on the firm's effectiveness. In this section, we review the related prior literature and describe how our study could contribute to the challenges of CGSs.\n3. Data and methodology: 3.1 Sample\n4. Empirical findings: This section begins with a discussion of the results of the event study, followed by a discussion of the results of the cross-section regression analysis for the acquirers' excess returns, including implications of the CG measures.\n5. Conclusion and recommendations: The corporate finance literature fails to reach any sort of consensus on the economic impact of M&As. 
From that perspective, one of the purposes of this study is to assess the effect of M&A announcements by European acquiring firms on their stock price. Our results indicate that acquirers realize significant positive excess returns in short event windows surrounding the announcement day. In addition, we find that these returns are affected by certain CG mechanisms such as board size, voting rights and ATPs. More specifically, boards consisting of 8 directors and more are negatively related with the acquirers' abnormal returns. Our evidence implies that smaller boards are more effective than oversized ones, which means that they communicate better and are more efficiently coordinated. Moreover, we show that higher voting rights generate higher gains for the acquirers, given that increased shareholder voting rights enable shareholders to play a more active role in the managerial decision-making, supporting value-enhancing strategic decisions. In the same direction, we suggest that ATPs are positively related with the abnormal returns of bidders, considering that they allow firms to protect their autonomy and to have sufficient time to make rational investment decisions. The different effect of ATPs on European firms relative to their US counterparts may be explained by the different legislative framework and the level of competition of the market for corporate control between the USA and Europe.\n", "role": "user" }, { "content": "Purpose: The purpose of this study is twofold: First, to assess the economic impact of Mergers and Acquisitions (M&As) on European acquiring firms from the beginning of the sixth merger wave onward. 
And second, to investigate the effect of CG mechanisms such as board size, voting rights and anti-takeover provisions (ATPs) on acquirers' gains, along with a set of control variables.\nDesign/methodology/approach: For the purpose of the study, the authors use a sample of 349 completed M&As across all business sectors between European firms from 01/01/2003 to 31/12/2017. Abnormal returns are estimated by applying an event study methodology, and the effects of CG mechanisms are assessed with univariate and multivariate cross-sectional regressions.\nFindings: The authors present evidence that acquirers realize significant positive excess returns upon the announcement of M&As. The authors find past profitability to be a strong indicator of value creation, while most of the traditional firm-specific and deal variables fail to interpret the results. The authors' analysis indicates that the examined CG measures have a significant effect on acquirer's gains. More specifically, the authors find that boards in excess of eight directors are negatively related to announcement-period abnormal returns. In contrast, the wealth effects for acquiring firms are positively related to shareholders' voting rights and/or to the number of ATPs. The estimated coefficients of all three CG mechanisms are statistically significant across alternative model specifications.\nResearch limitations/implications: A clear implication is that the existence of certain CG mechanisms leads to value-enhancing strategic decisions for European acquirers. In terms of policy direction, the authors' findings assist practitioners and/or national and transnational institutions in perceiving the efficacy of certain CG practices.\nPractical implications: This study indicates that Corporate Governance Statements (CGSs) fail to provide adequate information to investors to understand in-depth the CG mechanisms that companies apply. 
Thus, the authors recommend that CGSs should provide not only narrative information but also information that may generate value for shareholders and other stakeholders as well. Such information should be qualitative and/or quantitative in nature and be made available to market participants to support their decision-making.\nOriginality/value: To the authors' knowledge, this is the first study that investigates the effect of CG on the economic impact of M&As for European acquirers, using three widely examined CG mechanisms, namely, the board size, the voting rights and the ATPs. The authors' empirical findings form the basis for further examination of the linkage between M&As and CG, with the intention of establishing the appropriate CG framework that will ensure shareholder wealth creation. This line of research could produce new insights in the field, allowing investors and policymakers to appreciate the benefits of effective CG.\n", "role": "assistant" } ]
Organizing for teamwork in healthcare: an alternative to team training?
[ "Organizational design", "Team training", "Teamwork", "Interprofessional care" ]
[ { "content": "Summarize the following paper into structured abstract.\n\nIntroduction: Interest in healthcare teamwork has increased in the last decade. Consequently, healthcare professionals have often found themselves the subjects of organizational interventions to improve or implement teamwork, and are expected to work efficiently over the boundaries of traditional professions. An underlying assumption is that teamwork can make healthcare more efficient (Reeves et al., 2010). Previous research shows that teamwork improves patient care, patient safety, and organizational effectiveness, and increases job satisfaction (Welp and Manser, 2016; Kalisch et al., 2010; Manser, 2009; Lemieux-Charles and McGuire, 2006). It has also been shown that the lack of adequate team behaviors can increase the likelihood of patients experiencing major complications after surgery (Mazzocco et al., 2009).\nBackground: The state of teamwork in healthcare\nMethod: The healthcare literature contains a myriad of team definitions, and there is currently no real consensus about what a team really is (Bleakley, 2013). As the intent in this conceptual paper is to show how teamwork could be promoted or established by organizational means, a wide and pragmatic definition is applied. A team is considered to be a group of people who are set to work together on a task, where the task could be, for example, attending to a specific patient or a group of patients. What this group does in relation to their task is here considered teamwork, with a specific focus on how the work is done rather than on what constitutes the task itself (Marks et al., 2001). Many teamwork definitions specify qualities that should be present in order for it to be real teamwork, for example, clear roles, interdependence, shared identity, shared decision making, effective methods to accomplish shared goals, and so on (Reeves et al., 2010; Xyrichis and Ream, 2008; Wheelan, 2005). 
However, there is room for discussion of whether these factors in reality refer to qualities of teamwork or to qualities of the team that conducts the teamwork. These kinds of qualities are useful if the goal is to investigate the quality of teamwork or teams; however, the present aim is rather to investigate how teamwork could be improved. Thus, it is important to apply a team definition that includes not only already-successful teams, but prospective teams as well. As the definition above highlights that team members should work together on the task, it also implies that their work is to some degree interdependent.\nFactors that enable teamwork: There is already some previous research concerned with organizational characteristics considered beneficial for teamwork in healthcare, in the form of reviews and conceptual papers (Weller et al., 2014; Wahr et al., 2013; Lakhani et al., 2012; Ezziane et al., 2012; Manser, 2009; Xyrichis and Ream, 2008; Lemieux-Charles and McGuire, 2006; Mickan and Rodger, 2000). However, the combination of a holistic perspective on teamwork and a pragmatic, actionable approach is largely missing. The present paper therefore focuses on organizational design principles that are directly actionable; that is, possible to change by management. This does not mean that organizational characteristics such as an appropriate culture are unimportant (Mickan and Rodger, 2000), only that they are difficult to improve. Other issues with the organizational characteristics identified in previous research are that they are either too general or too trivial. Overly general characteristics are those that are too vague; for instance, Lemieux-Charles and McGuire (2006, p. 289) state that \"organizational context influences team effectiveness,\" but in order to be actionable there is also a need to conclude what a desirable context consists of and identify those aspects of it that are possible to change. 
Overly trivial characteristics are those that are already present for most healthcare teams, such as the need for a clear purpose, a specified task, relevant team members, and distinct roles (Mickan and Rodger, 2000). Most operating, intensive care or emergency teams have a clear purpose, a specified task, relevant team members and distinct roles, to list a couple of examples from interprofessional care contexts. Thus, a discussion of these factors would add very little. These factors also have in common that they are not really process characteristics but rather inputs in the IPO model.\nOrganizational design principles that promote the six teamwork enablers: Certain themes have reoccurred in the theoretical investigation of the teamwork enablers above, and it is obvious that the teamwork enablers are not totally independent of each other. Three organizational principles with a potentially positive effect on teamwork have been identified: team stability, occasions for communication, and a participative and adaptive approach to leadership. These three organizational principles will be discussed in order below, and their relationship with the different teamwork enablers is illustrated in Figure 1.\nDiscussion and conclusions: In terms of the IPO model, the approach presented here is concerned with how inputs to the teamwork process on the group and environmental level could improve the process. 
On the other hand, traditional team training is mainly concerned with inputs on the individual level; that is, the non-technical skills of the individual as input to the teamwork process (Hackman and Morris, 1975).\n", "role": "user" }, { "content": "Purpose: The purpose of this paper is to explore how organizational design could support teamwork and to identify organizational design principles that promote successful teamwork.\nDesign/methodology/approach: Since traditional team training sessions take resources away from production, the alternative approach pursued here explores the promotion of teamwork by means of organizational design. A wide and pragmatic definition of teamwork is applied: a team is considered to be a group of people that are set to work together on a task, and teamwork is then what they do in relation to their task. The input - process - output model of teamwork provides structure to the investigation.\nFindings: Six teamwork enablers from the healthcare team literature - cohesion, collaboration, communication, conflict resolution, coordination, and leadership - are discussed, and the organizational design measures required to implement them are identified. Three organizational principles are argued to facilitate the teamwork enablers: team stability, occasions for communication, and a participative and adaptive approach to leadership.\nResearch limitations/implications: The findings could be used as a foundation for intervention studies to improve team performance or as a framework for evaluation of existing organizations.\nPractical implications: By implementing these organizational principles, it is possible to achieve many of the organizational traits associated with good teamwork. 
Thus, thoughtful organization for teamwork can be used as an alternative or complement to the traditional team training approach.\nOriginality/value: With regard to the vast literature on team training, this paper offers an alternative perspective on how to improve team performance in healthcare.\n", "role": "assistant" } ]
The influence of corporate social responsibility on travel company employees
[ "Employees", "Corporate social responsibility", "Social exchange theory", "Work engagement", "Intention to stay", "Innovative behaviour" ]
[ { "content": "Summarize the following paper into structured abstract.\n\nIntroduction: Today's businesses are expected to be more transparent about, and responsible for, their activities and impacts. Thus, many companies exercise corporate social responsibility (CSR), which usually refers to the company's voluntary programs to help enhance environmental and societal conditions toward a more sustainable world (Park and Levy, 2014). CSR can also be instrumental in achieving a company's financial performance (Boesso et al., 2015) through good reputation (Maignan et al., 1999) and improved relationships with stakeholders (Rayton et al., 2015). Accordingly, CSR is inherently in accordance with stakeholder theory in that businesses can do well or better when they consciously take into consideration their stakeholders' interests in operations and decision-making (Jones, 1995).\nLiterature review and hypotheses development: Corporate social responsibility and individual employees\nMethod: Study site\nResults: Respondent characteristics\nConclusion and discussion: Conclusion\n", "role": "user" }, { "content": "Purpose: The purpose of this paper is to examine how employees' perceptions of corporate social responsibility (CSR) influence their work engagement (WE), innovative behavior (IB) and intention to stay (IS) with their company, to illuminate the role of CSR at the individual employee level.\nDesign/methodology/approach: The data for this study are responses of 455 employees to an online survey at the largest travel company in South Korea. Confirmatory factor analysis and structural equation modeling in Mplus 7.3 are used to analyze the data.\nFindings: Results show that employee perceptions of CSR regarding customers and employees significantly and positively influence their WE, which in turn positively influences their IB and IS. 
Moreover, WE mediates these relationships.\nResearch limitations/implications: Results of this study may not represent the entire travel industry or the country. This study's model should be tested in other companies and countries. Additionally, longitudinal studies will help understand how employees' perceptions of CSR and their effects on work attitudes and behavior change over time.\nPractical implications: CSR can be an important tool for developing social capital within an organization by increasing employees' engagement at work. Higher WE can help employees exert more efforts toward their company's innovation and stay longer with their company, which can contribute to the company's performance and competitiveness.\nOriginality/value: This study develops and empirically tests a theoretical model based on various disciplines. It extends existing CSR studies by examining the effects of CSR on WE and the mediation effects of WE, which have been rarely explored. This further explains how CSR influences employees' attitudes and behavior that benefit a company's competitive advantage to shed light on the resource-based view about employees being an important resource.\n", "role": "assistant" } ]
Six Sigma DMAIC project to improve the performance of an aluminum die casting operation in Portugal
[ "Six Sigma", "DMAIC", "Design of experiments (DOE)", "Low pressure die casting" ]
[ { "content": "Summarize the following paper into structured abstract.\n\n1. Introduction: Six Sigma is a project-based, customer-focused, and data-driven approach (Black and Revere, 2006) that makes use of a structured and systematic methodology, which relies on the define, measure, analyze, improve, control (DMAIC) roadmap for continuous improvement, to enhance the capability of business processes (El-Haik and Al-Aomar, 2006). To be effectively implemented, this problem-solving methodology requires full management belief and commitment, a well-trained and experienced technical project leadership to act as a change-agent, and a dedicated and competent multifunctional project team (Goldstein, 2001).\n2. Six Sigma: Six Sigma has been labeled and defined in a variety of ways, but it is often described from the following three viewpoints:\n3. Six Sigma in Portugal: Information about the adoption of Six Sigma in Portugal is scarce. This section aims to contribute to a better characterization of the initiative in the country. To that purpose, a specific literature review was conducted, and relevant data and information were compiled and analyzed.\n4. Case study: This section describes a Six Sigma project developed at Sociedade Transformadora de Aluminios, S.A., hereafter referred to as STA, an organization that is specialized in the design and production of systems for doors and windows, particularly for aluminum profiles. STA has implemented ISO 9001, has been certified to ISO 9001 by TUV Rheinland since 2006, and uses both Lean and Six Sigma for its continual improvement. Projects are selected using a matrix that rates them as a function of different parameters linked to STA's strategy, such as: environmental, safety, time reduction, savings, and payback in case an investment is required.\n5. 
Conclusions: After providing a review of the state of implementation of Six Sigma in Portugal, this paper describes the activities that were carried out to select, plan, and execute a continuous improvement Six Sigma project based on the DMAIC roadmap at a Portuguese company, STA - Sociedade Transformadora de Aluminios, S.A. The define phase allowed the team to better understand the problem, and determine that most of the defects had origin at the die casting operation. Further data were collected in the measure phase by means of production tests, which led to the estimation of the performance baseline of the manufacturing process. In the analyze phase of the DMAIC approach, a set of studies, including DOE, was conducted to identify the root causes of the die casting defects. Some of the relevant causes had origin in the gravity die casting operation itself, but others had origin in downstream operations where alloy mixings tended to occur, thus jeopardizing the mechanical properties of the AG4Z alloy. The improve phase involved the development and implementation of a set of improvement actions that were grouped around four areas of action. The introduction of the improvement actions significantly decreased the rejection rate of the Ho-2AG handle. 
The effectiveness of the improvement actions was also confirmed by the significant enhancement in the values of other metrics.\n", "role": "user" }, { "content": "Purpose: The purpose of this paper is to illustrate an application of the Six Sigma define, measure, analyze, improve, control (DMAIC) problem-solving methodology in a Portuguese industrial enterprise, aimed at reducing the rejection rate of a specific manufactured product due to defects generated in an aluminum gravity die casting operation.\nDesign/methodology/approach: Along the five-stage roadmap, a set of analytical and conceptual tools was systematically employed to better characterize the problem, define the product's critical-to-quality characteristics, estimate the process baseline, determine the relevant cause-and-effect relationships, identify the root causes leading to the high rejection rate, implement an improvement plan, determine the effectiveness of the improvement actions, and put in place a control plan.\nFindings: The project team discovered that the high rejection rate was caused by factors inherent to the gravity die casting operation itself and by the mixing of rejected parts made of different types of alloys from other locations within the plant. 
Another key finding was that the successful execution of the project was only possible due to the belief and support of top management and to the active involvement of the team members.\nPractical implications: This case study illustrates a successful practical application of a Six Sigma project in a small-medium enterprise in Portugal, as well as the operational and financial benefits that derived from it; thus providing a good example for others to follow.\nOriginality/value: Few cases of continual improvement initiatives, including Six Sigma projects, developed at a Portuguese company are available in the literature; this paper fills this void by describing a very successful application that demonstrates the benefits that other companies can obtain by adopting structured methodologies to improve the quality of their products and the efficiency of their processes. Moreover, the analysis and conclusions herein presented can be of great importance for companies using gravity die casting technology.\n", "role": "assistant" } ]
Faceted classification for museum artefacts: A methodology to support web site development of large cultural organizations
[ "Classification", "Museums", "Collections management", "Artefacts", "Organizational culture" ]
[ { "content": "Summarize the following paper into structured abstract.\n\n1. Introduction: Nowadays, thousands of museums give access to their collection all over the world through their web sites. The educational potential of these resources is immeasurable. Unfortunately, these web sites are often too complex and sophisticated to be useful and usable for the majority of visitors. According to Marty and Twidale (2004), many factors can overwhelm the museum web site user: overloading amount of multimedia content, very artistically designed graphical user interfaces, interfaces that only encourage exploration but not keyword research, sites designed poorly for museum professionals, and sites oriented to supplement physical museums. Too frequently, the needs of the users of these museum web sites are overlooked in the organization of the museum content. In order to minimize the aforementioned frustrations and difficulties, faceted classification presents great potential in providing the users with simple, flexible and scalable information access points. However, there are few methodological tools available for museums who wish to adopt such an approach in the development of their web sites.\n2. Objective of the project: The research project aimed to provide a visual representation of the Artefacts Canada digital collection, as well as a means for the majority of users to browse this content. Artefacts Canada Humanities is a database containing approximately 3.5 million records describing the different collections of Canadian museums. This database is at the core of the CHIN's (Canadian Heritage Information Network) work to make Canada's heritage accessible. 
CHIN's mission is to connect Canadians and worldwide audiences to Canada's heritage and to promote the development, the presentation and preservation of Canada's digital heritage content for current and future generations. In order to provide a new visual representation for this digital collection, a faceted taxonomy model was created. The taxonomy creation involved the implementation of an innovative methodology.\n3. Methodology: A four-step methodology was adopted for the development of the faceted taxonomy model: 1. Best practice review: an extensive analysis of existing terminology standards in museum communities and public web interfaces of large cultural organizations was performed. 2. Domain analysis: extracting and comparing relevant concepts from terminological authoritative sources. 3. Term clustering and entity listing: the breaking up of the taxonomy domains into potential facets. 4. Incremental user testing: to validate and refine the taxonomy components (facets, values, and relationships). The four phases of the methodology are described below. The main results are then presented.\n3.1 Best practice review\n4. Results: The first part of the best practices analysis entailed the review of terminology standards and background documentation used in the Canadian museum community. We examined 13 relevant resources. This review was based on two specific dimensions: the coverage, concepts, and categories used to describe museum collections, and the general best practices and applications used for a specific museum collection. From our analysis, we observed that the majority of the reviewed resources covered many different aspects of the museum's collections. The depth and complexity of these aspects vary significantly from one resource to another, depending on many factors such as local policies and user needs. According to the first phase of our analysis we can divide the elements used to describe the museum's collections into two broad categories: 1. 
Descriptive elements, i.e. elements which describe the content of the object. 2. Contextual elements, i.e. elements which describe the context and the form of the object. Our analysis also brought to our attention that the 13 examined resources present different approaches to organize information. In fact, the majority of the examined resources provide a hierarchical structure, i.e. using first-level elements and in most cases, second-level elements, third-level elements, and so on. According to the examined resources, there is no standard number of hierarchical levels. Moreover, the hierarchical structures are in many cases very deep while very limited in others. Consequently, we did not distinguish a particular tendency in the organization of the second level of the identified first level elements. When examining the 13 mentioned resources, we observed that six resources exist only in English while five of them are bilingual. Consequently, an individual having little knowledge of the English language and wishing to use these resources will face a major linguistic problem unless a translation mechanism is offered to facilitate their use. Since the target audience of the future taxonomy is bilingual and most resources used by the museum only exist in English, it was essential to consider the linguistic aspect in the development of the taxonomy. For the second part of the best practices analysis, 16 selected public web interfaces designed for the collections databases of large cultural organizations were examined. This analysis was mostly based on standards used to describe online collections yet also considered the many aspects of usability including search and browse functionalities offered by these web sites. The examination of the 16 web sites revealed a great variability in the organization of the information. 
The majority of the sites have a tendency to organize the information in a hierarchical structure, mostly with a first level structure (including two to nine first-level facets) and second level elements more or less developed according to the collections offered online. However, the examination of these web sites did not determine a particular tendency in the organization of the second level of the identified first level elements. Finally, we noticed that some web sites organize the access to their collections according to the targeted public (young children, students, etc.). The web site is therefore a medium for information sharing and offers a platform for representing and promoting the collection of the museum. Interface and information usability refers to the ease with which the web site can be used by its intended audience to achieve defined goals and must take into account factors such as information structure, design, and search functionalities to ensure the best access possible. The examination of the 16 mentioned web sites again revealed disparities from one museum to another, particularly in how information is organized and interfaced, and in the existing search and browse functionalities. Consequently, the users of several web sites can quickly be overwhelmed by web sites which are especially dedicated to museum professionals (terminology or categorization of the information being too difficult to understand for a non-professional) or by web sites offering an interface that requires an extended learning process. Finally, the majority of the web sites examined (12 of 16) only offer an English interface, which greatly limits the target audience. In brief, the analysis of the best practices provided recommendations regarding the development of the Artefacts Canada taxonomy. 
The recommendations focused on the top-level facet definition and their organization for the future taxonomy users. Domain analysis consisted of comparing the fields used in the Artefacts Canada database with those used in the resources analyzed during the best practices review. The conceptual and statistical analysis of the frequency reports allowed carrying out a categorization of the fields (i.e. to classify the fields) into five categories: 1. classifying fields; 2. fields relating to the description and the composition of the object; 3. fields relating to time; 4. fields relating to the place of creation and use of the object; and 5. fields relating to the artistic, cultural and academic context in which the object was created. It also allowed considering several conceptual regroupings and subdivisions. Finally, the domain analysis led to the identification of the existence of a relation between several fields and the relevant fields for later facet identification. Resulting from domain analysis, we were able to limit the scope and the coverage of the future taxonomy while ensuring the taxonomy coverage would be accurate and sufficient, explore and detect potential facets and provide general recommendations on first-level facet selection. Three essential steps were followed in the development of the classification structure. First, the taxonomy domain was broken up into elemental concepts. These concepts were gathered into fundamental classes. These classes were the basis from which facets were selected. Second, the facets were organized into a coherent structure realized simultaneously with the choice and definition of top-level facets and their sub-facets. The number of top-level facets, as well as the depth of the taxonomic structure, was restricted. The number of first level facets was limited to seven and the level of depth reduced to four, developing up to five in some instances. 
Third, the last stage of the facet and sub-facet selection process entailed the normalization of the facets and their descriptors using authoritative sources. The facets and their descriptors were first developed in English while keeping track of French resources when available. The results of the user testing are twofold. Card sorting results were compiled and analyzed in order to compare the piles created by the users and the naming assigned to each pile. According to the results obtained from the card sorting exercise and from user comments, several changes were brought to the first draft of the taxonomy. For example, the facet \"Subject\" was renamed \"Subject Represented\", the facet \"Time\" was merged with \"Style & Period\" and the discipline facet was \"flagged\" as a difficult facet for users, requiring further validation during the second phase of the user validation process (see Figure 1). The second phase of user testing revealed that users mostly focus on facets such as \"Geographical Location\", \"Style & Period\" and \"Object Type\" during the retrieval simulation. This does not indicate that the other facets were not efficient, but rather that these choices were mostly driven by the queries designed to generate the browsing. Due to time limitations, it was not possible to explore all the possibilities offered by the taxonomy. However, these results highlighted the value of the new browsing interface. The number of facets used by the participants for multiple-concept queries was also an interesting outcome of the experimentation. The results obtained indicated that users are willing to search objects through a combination of categories. In other words, the user testing emphasized the enormous potential of facet classification and the endless possibilities offered to the users to browse the database with multiple entry points. 
The user testing also demonstrated that the approach in designing the new taxonomy answered the fundamental questions of who is doing something, what is being done, how it is being done, by what means, where and when it is being done. Each first level facet is a specific entity that can be employed individually as well as in combination. Finally, we were particularly encouraged by the response of the participants stating the structure's ease of use and confirming their enthusiasm to browse the Artefacts Canada database with this taxonomy. The final version of the Artefacts Canada taxonomy includes seven facets. As depicted in Figure 1, three main perspectives have oriented the choice of the top-level facets: 1. Context facets: facets relating to the set of facts or circumstances surrounding the creation of the artefact. 2. Physical property facets: facets that constitute properties characterizing the apparent artefact's nature, form or function. 3. Motif facet: facet representing the unifying idea being communicated or rendered by the artefact. These perspectives constitute the rationale behind the designation of the seven top-level facets aiming to describe an artefact from its essential angles. Each of the seven facets is briefly defined in Table I.\n5. Discussion and conclusion: Following this analysis of museum resources and web sites, we recommend that facet organization should be based on: facilitating the learning process, helping users to remember how to use the web site, reducing the likelihood of errors, and increasing efficiency and effectiveness in order to achieve a higher degree of user satisfaction. More specifically, we also recommend that the general organization of taxonomies should take into account the users' perspective in order to increase the level of satisfaction. To that end, taxonomy developers should limit the number of choices presented to the users in order to avoid frustration and increase the degree of satisfaction. 
Finally, we suggest that the interface, searching, and browsing functionalities in French and English should be as simple as possible in order to improve the efficiency and the effectiveness of the user searches. In the 1970s and 1980s, many cognitive scientists studied the way in which people naturally classify and categorize things. Rosch (1978) defined two principles of categorization. The first is \"cognitive economy\", which states that classifications exist in order to provide the greatest amount of information with the least amount of cognitive effort. The second principle is the \"perceived world structure\", i.e. the world as we perceive it gets delivered to us as structured information, not in an arbitrary manner. Following the analysis of several resources and web sites, we recommend organizing the information in a manner which offers a cognitive economy in addition to a definite hierarchical structure. For that aim, the taxonomy resulting from this project includes essential facets, both in English and in French, which appropriately represent the resource in order to increase the effectiveness and the efficiency of the user browsing and searching; facets which present the appropriate level of granularity in order to improve the effectiveness and the efficiency of the user searches; and finally, clear and available definitions in order to enhance users' satisfaction with the taxonomy. The resulting bilingual and expandable vocabulary structure will further be used to describe the Artefacts Canada database records. Three main perspectives have oriented the choice of the top-level facets: the context facets, i.e. facets relating to the set of facts or circumstances surrounding the creation of the artefact; the physical property facets, i.e. the facets that constitute properties characterizing the apparent artefact's nature, form or function; and the motif facet, i.e. the facet representing the unifying idea being communicated or rendered by the artefact. 
These perspectives constitute the rationale behind the designation of the seven top-level facets aiming to describe an artefact from its essential angles. This structure offers a high-level and flexible representation of the information accessible in the database. The new taxonomy also helps represent the complexity of the content and group objects into similar concepts to classify all records of the Artefacts Canada database. The user-friendly bilingual taxonomy will provide worldwide visitors with the means to better access Canadian virtual museum collections. For practitioners, the methodology developed within this project is a direct contribution to support web site development of large cultural organizations.\n", "role": "user" }, { "content": "Purpose: - This research project aims to provide a new visual representation of the Artefacts Canada digital collection, as well as a means for users to browse this content. Artefacts Canada Humanities is a database containing approximately 3.5 million records describing the different collections of Canadian museums.\nDesign/methodology/approach: - A four-step methodology was adopted for the development of the faceted taxonomy model. First, a best practice review consisting of an extensive analysis of existing terminology standards in museum communities and public web interfaces of large cultural organizations was performed. The second step of the methodology entailed a domain analysis; this involved extracting and comparing relevant concepts from terminological authoritative sources. The third step proceeded to term clustering and entity listing, which involved the breaking up of the taxonomy domains into potential facets. 
Incremental user testing was also conducted to validate and refine the taxonomy components (facets, values, and relationships).\nFindings: - The project resulted in a bilingual and expandable vocabulary structure that will further be used to describe the Artefacts Canada database records. The new taxonomy simplifies the representation of complex content by grouping objects into similar facets to classify all records of the Artefacts Canada database. The user-friendly bilingual taxonomy provides worldwide visitors with the means to better access Canadian virtual museum collections.\nOriginality/value: - Few methodological tools are available for museums which wish to adopt a faceted approach in the development of their web sites. For practitioners, the methodology developed within this project is a direct contribution to support web site development of large cultural organizations.\n", "role": "assistant" } ]
Reading digital texts: obstacles to using digital resources
[ "Media", "New literacies", "English and media" ]
[ { "content": "Summarize the following paper into structured abstract.\n\nIntroduction: At Harrison High (all names are pseudonyms), a large public high school in a high poverty area in the northeastern USA, two of the 12th-grade language arts classes used an entirely digital textbook, and all work was completed on Chromebooks. At the start of each lesson, when students picked up their Chromebooks, some quickly located the documents they were working on, then opened additional tabs to pass time while waiting for the lesson to begin. These tech savvy students sometimes read or played games, but they most often watched videos. From observing this tendency, it became clear that the way students engaged with digital environments in the time before formal instruction was very different from the way they were being asked to engage with digital environments in their learning.\nBackground to the study: The New London Group (1996) observed that technological change initiates new \"technologies of meaning\" (p. 64). For English and language arts teachers, the significance of this argument is that if the way meaning is made is changed by technological innovation, then teaching has to adapt.\nStudy procedures: Context of the investigation\nFindings: While there are, on one hand, the voices of theorists arguing for the significance of differences between multi-modal and conventional print texts, the overarching finding of this study is that students do not treat those differences as significant and indeed, they did not make use of the resources provided by digital texts.\nDiscussion: In 1996, the New London Group challenged educators to \"account for the burgeoning variety of text forms associated with information and multi-media technologies\" (p. 61). 
Almost a quarter of a century later, it seems clear that for these students that challenge still needs to be taken up.\n", "role": "user" }, { "content": "Purpose: Digital texts are increasingly widespread and research is needed on how students use digital texts, particularly in school-based classwork. The purpose of this study is to challenge persistent myths about young people's affinity with digital tools by investigating the factors that condition or limit the ways students interact with and respond to digital web-based texts.\nDesign/methodology/approach: Two 12th grade English classes, 21 students in all, produced written responses to a digital text. Following a preliminary analysis of student writing, 7 students, representing diverse reading backgrounds, were interviewed. Content units were used as a unit of analysis for both interviews and written data. Following open coding, 14 initial codes were condensed into 3 categories, namely, routines, tools and beliefs.\nFindings: This study suggests that for students to see digital textual resources as significant, they must be guided to engage with these features. Classroom routines, the tools used in teaching and learning reading and the beliefs students hold about their school-based reading can restrict student uptake of digital features of texts. Instruction must be adapted to include teaching on digital texts.\nOriginality/value: Students interact with digital texts daily, yet explicit teaching on reading digital texts presents particular demands. This study contributes to understanding the challenges faced by students and teachers when they grapple with texts in different forms.\n", "role": "assistant" } ]
Influence of culture, price perception and mall promotions on Indian consumers' commitment towards malls
[ "India", "Culture", "Price perceptions", "Malls", "Retailer promotions" ]
[ { "content": "Summarize the following paper into structured abstract.\n\nIntroduction: The overall Indian retail sector is projected to grow from $672bn in 2017 to $1.1 trillion in 2020 at a compound annual growth rate of 17.85 per cent (IBEF, 2018). The organised retail market is expected to reach $111.25bn in 2019 from $70.45bn in 2017 (IBEF, 2018). The robust growth of organised retail is attributed to increased consumer spending in both urban and semi-urban regions, investment by national and global players in retail infrastructure, and proliferation of supermarkets in non-metropolitan cities. While the share of organised retailing in developed economies like the USA, UK and Singapore is more than 80 per cent, in India, it is barely nineteen per cent (Dutta et al., 2016). The modern retail formats are concentrated in metropolitan cities and account for $62bn retail sales. However, spurred by recent government initiatives on allowing foreign direct investment in single and multi-brand retailing, organised retailing is showing signs of fast-track growth. This is augmented by e-tailing options in non-metropolitan cities where the presence of modern brick and mortar retail formats may be scarce (Dutta et al., 2016).\nLiterature review: The study addresses the nature of mall retailer promotions and their impact on sales and consumer purchase behaviour. Second, it examines the influence of cultural dimensions and price perceptions on retailer promotions and the relationship of these factors in explaining consumers' commitment towards the malls.\nResearch method: Measures\nData analysis and results: In this section, the data collected from the respondents were put through a sequence of exploratory and confirmatory factor analysis (CFA) (measurement model) using structural equation modelling. This helped in identifying the relevant factors for the study. 
Finally, the hypotheses were tested by analysing the structural model.\nFindings: The findings revealed that the cultural dimension of masculinity had a significant effect on consumers' perception of loyalty cards offered by the mall retailers (b=0.153, p<0.05), providing empirical support to H1c. Masculinity was not found to be a significant predictor of consumers' perception of discounts (H1a) and promotional offers (H1b). The effect of the cultural dimension of uncertainty avoidance on consumers' perception towards discounts (H2a), promotional offers (H2b) and loyalty cards (H2c) was not significant. The cultural value of long-term orientation was found to have a significant influence on perception of promotional offers (b=0.127, p<0.05) and loyalty cards (b=0.129, p<0.05), with the effect being stronger for the latter. The findings revealed that price-conscious shoppers are attracted towards all the three types of retailer promotions, i.e., discounts offered by mall retailers (b=0.195, p<0.05), promotional offers (b=0.158, p<0.05) and loyalty cards (b=0.171, p<0.05). Thus, H4a-H4c were supported. Discounts were found to be the most effective promotional strategy in attracting price-conscious shoppers, followed by loyalty cards and promotional offers, respectively. The effect of value consciousness on consumers' perception towards discounts (H5a), promotional offers (H5b) and loyalty cards (H5c) provided by the mall retailers was not significant. The same can be attributed to the fact that value conscious shoppers seek a balance between quality and price. Many a time prices act as a cue for quality; hence, lower prices and promotional offers were associated with low quality. Promotional strategies without promising quality assurance would not attract value conscious shoppers. The retailers must focus on highlighting the quality aspects of the product along with their promotional strategies. 
Coupon prone consumers were found to have a significant positive perception towards promotional offers (b=0.156, p<0.05), providing support to H6b. However, coupon prone consumers did not have a significant perception of discounts (H6a) and loyalty cards (H6c) offered by mall retailers. Since coupon prone consumers seek transactional utility, the process of redemption of promotional offers was attractive to this segment. Perception of discounts and promotional offers provided by mall retailers had a significant influence on commitment towards the mall, supporting H7 and H8, respectively. Discounts offered by the retailers had a greater effect on commitment (b=0.148, p<0.05) than promotional offers (b=0.128, p<0.05). Discounts provide instant benefits and hence enhance acquisition utility, thus having stronger effects. However, the effect of loyalty cards on commitment towards the mall was not significant (H9). The effect size for significant relationships was calculated using Cohen's f2 coefficient to understand the practical significance of the relationships (Khalilzadeh and Tasci, 2017). The Cohen's f2 coefficient ranged between 0.01 and 0.024 for all the significant relationships, indicating small effects (0.01<f2<0.06; Khalilzadeh and Tasci, 2017). The results of the hypothesized relationships between variables are shown in Figure 2.\nDiscussion: The research makes two contributions to the existing research on mall retailing. First, it examines the influence of promotions used by retailers located in malls and its impact on consumers' commitment towards the mall. It focusses on both national and local retail stores that operate in malls. Thus, it looks at the importance of promotional strategies used by different retail stores located in malls in drawing traffic to the mall. Second, it aims at extending the existing literature on culture and sales promotions by examining the effect of Hofstede's cultural dimensions on different sales promotion techniques offered by the retailers. 
Although existing research has pointed out that cultural nuances could lead individuals to select different sales promotions across cultures (McNeill, 2006), there is a paucity of studies which have tested the same empirically. Research on culture and retail has emphasised the importance of culture on consumer's price perception and buying behaviour (Wang et al., 2010; Meng, 2011, Khare, Achtani and Khattar, 2014; Khare, Pandey and Bhardwaj, 2014). The current research adds to mall retailing research by analysing the influence of uncertainty avoidance, masculinity and long-term orientation on consumers' behaviour towards mall retailer promotions like price discounts, loyalty card and promotional offers. The findings are in tandem with research on collectivist cultures that suggest the importance of cultural values on behaviour to avoid risk and plan for future (Yau, 1988; McNeill, 2006; Jo and Sarigollu, 2007; Meng, 2011). Discounts and promotional offers symbolise thrift and saving money for future and are related to uncertainty avoidance and long-term orientation. The findings posit that consumers have positive perception towards retailer promotions as it enables them to get financial benefits. The influence of masculinity on consumer's perception towards loyalty cards could be attributed to its relationship with wealth, well-being and status in society as a symbol of success (Lindridge et al., 2004; Dorotic et al., 2012). Promotions combined utilitarian, hedonic and symbolic benefits (Stathopoulou and Balabanis, 2016) which enabled consumers to purchase premium brands at reasonable prices that otherwise may be difficult to acquire. 
It could be related to individuals' desire to improve social image (Markus and Kitayama, 1991; Lindridge et al., 2014).\nManagerial implications: The findings suggest the importance of three retailer promotions, namely, discounts, offers and loyalty cards, and provide valuable insights for both researchers and mall practitioners about consumers' perception towards promotion. The influence of cultural values of masculinity and long-term orientation on promotions and loyalty cards was established. However, loyalty cards do not influence commitment towards the mall. This may be attributed to consumers perceiving fewer short-term benefits accruing from loyalty cards. The influence of masculinity on positive perception towards retailer promotions could be ascribed to the importance given to wealth and status in society as a symbol of success (Lindridge et al., 2004; Banerjee, 2008; Khare, 2011a). Thus, it becomes imperative to link promotions with status, success and well-being. Mall retailers, while designing promotional strategies, should try to communicate utilitarian, hedonic and status benefits. Promotions should have the dual strategy of not only delivering value but also conveying a premium image. Retailer promotions should enhance premium and status perceptions so that consumers feel that quality brands can be acquired without diluting global image. Promotions should help in targeting price-conscious consumers by creating a premium positioning, not only for the retailers but also for the malls. Furthermore, discounts and offers can be related to contests, competitions and consumer participation.\nLimitations and scope for future research: The research combines cultural values and price sensitivity dimensions to study retailer promotions and commitment towards malls in India. It is focussed only on mall retailers and does not include retailers operating in unplanned business districts like central and secondary business districts. 
While the findings add to existing literature on mall retailing, they cannot be generalised to other retail formats. The study was limited by the sample selection which was predominantly restricted to metropolitan cities and may not explain consumers' behaviour towards retailer promotions in smaller cities in India. Since the nature and size of malls vary across different parts of the country, comparison across various regions in India can provide valuable insights. A cross-country analysis regarding consumer behaviour towards retailer promotions may be undertaken to understand differences among consumers across different countries. The significant relationships in the study displayed small effect sizes which suggests that other factors must be considered by future research which may better explain the choice of promotions and commitment. Nature of promotions (utilitarian, hedonic and symbolic value) and consumers' receptivity towards them differs across retail formats and cities. These aspects may be examined with respect to cultural values. The role of demographic factors like age, income and gender on retailer promotions and patronage was not discussed. Future research can be directed to understand the attitude towards promotions by focusing on different age and income groups and consumers residing in metropolitan and non-metropolitan cities. Relationship factors, service staff behaviour and type of retailer can be analysed with respect to promotional strategies. Influence of personality factors like innovativeness, self-congruence and susceptibility to interpersonal influence on retailer promotions can be considered. Future research can also look at the impact of promotions by mall operators.\n", "role": "user" }, { "content": "Purpose: The exponential growth of organised retail has led to competition among mall retailers with the use of promotions to increase traffic to the stores. 
The footfall in the malls is dependent on the sales generated by various retail stores located in the malls. The current research analyses the role of promotions used by the retailers located in Indian malls in improving consumers' commitment towards the mall. The purpose of this paper is to examine the influence of culture, personality traits like value consciousness, price consciousness and coupon proneness, and promotions used by retailers in malls on consumers' commitment towards the malls.\nDesign/methodology/approach: Eight malls from six cities were selected by popularity and footfall. Mall-intercept technique along with systematic sampling was used to collect data from 453 mall shoppers using a self-administered questionnaire. Structural equation modelling was used to analyse the data.\nFindings: Retailer promotions were categorised under discounts, promotional offers and loyalty cards. The findings suggest that cultural values of long-term orientation and masculinity have an impact on consumers' perceptions towards discounts, promotional offers and loyalty programmes. Different promotional strategies had varied responses from consumers.\nPractical implications: Mall retailers can use the findings to design promotions according to the cultural values of masculinity and long-term orientation. Loyalty programmes can symbolise status and long-term relationship with retailer. Commitment towards retailers and consequently malls could be enhanced through discounts and promotional offers. Different promotional strategies can be used to target price-conscious consumers and increase footfall in the stores. 
The linkages between cultural dimensions and promotional techniques would be helpful in targeting different consumer groups by designing promotions which are in line with cultural values.\nOriginality/value: The research extends the existing literature on mall retailing by analysing the importance of cultural dimensions on sales promotions strategies used by retailers in malls. The study establishes that perceptions towards retailer promotions differ across cultures. Different promotions offered by retailers generate varied response from consumers across different cultures which would influence their commitment towards malls.\n", "role": "assistant" } ]
Modeling of an IoT-enabled supply chain for perishable food with two-echelon supply hubs
[ "Internet of Things", "Conceptual model", "Supply chain for perishable food", "Two-echelon supply hubs" ]
[ { "content": "Summarize the following paper into structured abstract.\n\n__NO_TITLE__: Nomenclature\n1. Introduction: Millions of people are increasing their spending on fresh, delicious but perishable and expensive goods like fruits, seafood, etc. According to the AQSIQ Statistics Report, the demand for salmon (which is a kind of popular seafood) imported to China has increased from 74 thousand tons in 2003 to 283.8 thousand tons in 2013 (www.aqsiq.gov.cn/zjsj/tjxx/tjsj/). The huge demand for perishable food has stimulated the development of many related profit-making industries greatly, e.g., the cultivating, processing, and transportation of the perishable products. These activities are connected throughout the operations of the supply chain for perishable food (SCPF).\n2. Literature review: Three streams of literature are relevant to this research, which include the IoT-based real-time data capturing, the SCPF, and the theory of the SHIPs.\n3. The industrial agglomeration phenomenon in SCPF: In the perishable food industry, an interesting phenomenon is that the suppliers and the retailers of SCPF often gather together to form their industrial parks or markets, respectively. From the perspective of industrial organization theory, this phenomenon is called industrial agglomeration. The agglomeration force creates attraction to clusters that maximizes the tendency to co-locate with customers and intermediate good supplies (Baldwin et al., 2012). Generally, industrial agglomeration is considered as the inevitable outcome of economic development and now largely appears in all kinds of industries with the form of industrial parks. As a result, the development of industrial parks has become a critical strategy to promote economic development in many countries (Zhe et al., 2015).\n4. Modeling of SCPF-2ESH: In this section, a novel SCPF model is introduced, motivated by the trend of development in the present perishable food industry. 
The logical and information flow of the supply chain is redesigned, where the supply hubs play a dominating role in the supply chain.\n5. Overall architecture of the deployment of IoT in SCPF-2ESH: The real-time traceability and visibility of supply chain processes play a critical role in improving the performance of SCPF. This section aims to apply the concept of IoT to the SCPF field and to build up a real-time information-capturing and integration reference architecture for SCPF-2ESH. By implementing the infrastructure of IoT, an overall architecture for the real-time information capturing and integration of the IoT-enabled SCPF is designed in Figure 3. Under this architecture, the real-time tracking and tracing of the supply chain can be achieved. Then, the execution processes of the whole supply chain can be dynamically monitored. The sensor networks are used to measure the dynamic parameters such as environment (e.g. temperature, pressure, humidity, etc.), movement (e.g. velocity, vibration, etc.), and the real-time status of manufacturing things. The architecture provides the supply hubs (also known as the dominators of SCPF) with real-time information to drive the operations of SCPF.\n6. The workflow of order assignment in an IoT-enabled SCPF-2ESH: The fulfillment of real-time information capturing and sharing can effectively mitigate the bullwhip effect generated during the supply chain operations and can reduce information asymmetry among nodes of supply chain. Unlike the relatively stable partnerships in the upstream supply hubs, different processors may participate in the operations of supply chain in different time period of the order. Based on the IoT technologies, the downstream supply hubs can gather information from the processors and take orders in place of the processors. Retailers who have demand for products should register and publish orders to the downstream hubs. Their orders should specify the required type, amount of products, etc. 
Different orders are quickly aggregated or decomposed on the information services platform, and the related information will be shared between the two supply hubs. Upon receiving the shared information of the orders, the upstream supply hub will organize suppliers to prepare the necessary raw materials to fulfill the orders, as required by the downstream supply hubs. To realize this workflow, a processors' satisfaction model is presented as follows. The notations of the model are defined in \"Nomenclature.\"\n7. Case study: In this section, a case demonstration of the SCPF-2ESH model is analyzed to illustrate how the proposed model was introduced to a company to upgrade its strategy of supply chain management. In addition, lessons learnt from the case are drawn in a normative manner for the effective implementation of SCPF by other companies in similar situations.\n8. Conclusions and future work: Currently, the perishable food industry needs to establish an efficient and quick-response supply chain with low cost to meet the challenge of keeping the perishable food fresh even in cross-regional cases. A conceptual model was established for the perishable food supply chain with 2ESH. In order to realize the real-time information sharing among different roles in a supply chain, an easy-to-deploy and simple-to-use architecture for SCPF is developed by applying the IoT technologies. In our research, the IoT-enabled SCPF has two supply hubs which dominate the operations in the supply chain. The two hubs respond to their nearby nodes of the supply chain. They can provide public warehousing and logistic services, and can also be used for online operational monitoring and emergency treatment. The performance of the supply chain has improved when implementing the proposed model, as is demonstrated by the case study.\n", "role": "user" }, { "content": "Purpose: The huge demand for fresh goods has stimulated lots of research on the perishable food supply chain. 
The characteristics of perishable food and the cross-regional transportation have brought many challenges to the operation models of the perishable food supply chain. The purpose of this paper is to address these challenges based on the real-time data acquired by the Internet of Things (IoT) devices.\nDesign/methodology/approach: IoT and the modeling of the Supply Hub in Industrial Parks were adopted in the perishable food supply chain.\nFindings: A conceptual model was established for the IoT-enabled perishable food supply chain with two-echelon supply hubs. The performance of the supply chain has improved when implementing the proposed model, as is demonstrated by a case study.\nOriginality/value: In our model, the supply hubs, which act as the dominators of the supply chain, can respond to the real-time information captured from the operation processes of an IoT-enabled supply chain, and thus provide public warehousing and logistic services.\n", "role": "assistant" } ]
The transformative service paradox: the dilemma of wellbeing trade-offs
[ "Transformative service paradox", "Wellbeing", "Dilemma", "Trade-offs" ]
[ { "content": "Summarize the following paper into structured abstract.\n\nIntroduction: A transformative service is one \"...that centers on creating uplifting changes and improvements in the well-being of both individuals and communities\" (Ostrom et al., 2010, p. 12) and which is found in multiple contexts, such as healthcare, tourism and finance, and even service industries which may not appear altruistically inclined at first (Rosenbaum, 2015). The importance of transformative services is underlined by issues such as the declining life expectancy in countries such as the United States (American Academy of Family Physicians, 2018) and United Kingdom (Institute and Faculty of Actuaries, 2019). Alarmingly, this has led to forecasts of future generations being worse off than their parents in crucial wellbeing areas (Grattan, 2014). In response to these issues, service scholars urge greater understanding of how services can assist in improving individual and societal wellbeing, which has evolved into the Transformative Service Research (TSR) agenda. Wellbeing is thought to be a multi-faceted concept with physical, emotional, financial and other aspects which are a focus of services (Pham et al., 2019; McColl-Kennedy, Hogan, Witell and Synder, 2017; Guyader et al., 2019). Transformative service research has consistently shown that services can improve the wellbeing of consumers and society, though to date TSR has focused on wellbeing as a whole rather than from a multidimensional angle (Kuppelwieser and Finsterwalder, 2016). This represents a new direction for transformative service research to take, given that scholars have called for a greater balance of research exploring both the positive and negative impacts of services (Caic et al., 2018; Ple and Chumpitaz Caceres, 2010). 
This research sits at the nexus of positive and negative service impacts, providing new insights into how transformative services can affect wellbeing. The current research focuses on how consumers often experience a dilemma whereby they must trade off one dimension of wellbeing against another. For example, a dental surgery provides an important service that ultimately improves consumer physical wellbeing, but at the same time, this service results in a significant negative impact to the consumer due to the size of the bill, subsequently detracting from their financial wellbeing (c.f. Hill et al., 2013; Kelly et al., 2005). Thus, transformative service wellbeing goals can often unknowingly compete, resulting in a predicament for consumers as to which wellbeing goal should take priority. To explain these dilemmas experienced by consumers, this article theoretically proposes and empirically examines the transformative service paradox. In service literature, many scholars have contributed to explaining consumer trade-offs in the evaluation of service attributes, such as price, level of quality and level of personal service (e.g. Ostrom and Iacobucci, 1995), which underpin central outcomes such as value (a trade-off of costs and benefits; Kleijnen et al., 2007). Yet, while there is understanding that consumers negotiate with themselves as to what trade-offs with service attributes are made in the lead-up to a purchase decision (a \"self-dilemma\"), little of this thinking has transferred to transformative service research. The first aim of this paper is to understand how individuals trade off different dimensions of wellbeing with themselves. Another dilemma experienced by consumers is performing behaviors which may have competing benefits for themselves and others (a \"social dilemma\"). For example, during a cold winter consumers may choose to use a large amount of electricity to heat their home, which increases their physical wellbeing. 
However, this is at the expense of others as placing strain on the grid can lead to increased investment (and higher electricity bills), and environmental wellbeing suffers as more resources are consumed. Essentially, the social dilemma represents a mind-set problem whereby individuals hold a zero-sum logic: if they believe that for one individual to \"win\" another has to \"lose,\" then they will seek self-benefit first, rather than focusing on mutual benefit. Literature explains that consumers will behave differently based upon the beneficiary of their actions (Duclos and Barasch, 2014; Ye et al., 2015). However, little is known about how consumers behave when presented with social dilemmas relating to transformative services. The second aim of this paper is therefore to contribute to service knowledge related to the trade-off consumers may undertake for the wellbeing of \"the self\" vs. others. This paper therefore seeks to address two research questions: RQ1: \"How do individuals trade-off different dimensions of individual wellbeing?\" (Self vs. Self); and RQ2: \"How do individuals trade-off individual and others wellbeing?\" (Self vs. Other). To answer these two research questions, the current paper applied a two-study mixed method approach using in-depth household interviews and a behavioral economics laboratory experiment in which the transformative service paradox is first explored and then confirmed within an electricity service context. Electricity usage (or reduction of usage) has been identified as an important area for TSR with calls for further research in this area (Guyader et al., 2019). Electricity services form the foundation of modern society and underpin the reliable and affordable supply of other transformative services such as health services, access to education and financial services. 
Electricity services affect societal and individual wellbeing by improving quality of life (World Energy Council, 2016) and can therefore be considered a transformative service. With global consumption of energy on the rise (Energy Information Administration, 2019) as consumers increasingly desire physical comfort (through appliances such as air-conditioning) at the expense of society's ability to provide the energy (environmental and resource issues), we see a \"transformative service paradox\" (TSP) where consumers need to trade off their wellbeing with that of society - resulting in a social dilemma. Research has noted that encouraging reduction of electricity usage to improve financial wellbeing can be counter-intuitive for some market segments, as this can detract from physical wellbeing (Waitt et al., 2016). Thus, electricity is noted in the literature as a transformative service requiring exploration, and it is a context that involves trade-offs and social dilemmas, making it an appropriate context for this research. The use of electricity services extends the reach of TSR beyond the typical health contexts and broadens the understanding of how services can create uplifting change to wellbeing. The remainder of this article is structured as follows. First, a review of relevant literature is presented, including transformative service research, trade-offs and social dilemmas, which are then synthesized into the theorization of the transformative service paradox. Next, an overview of the method is presented, followed by the presentation of Study 1 and Study 2.
To conclude, the theoretical and practical implications of the paper are discussed, as well as an outline of the paper's limitations and future directions for research.\nLiterature review: In this literature review, the current state of play for transformative service research is outlined, with particular attention to the contexts and conceptualizations of wellbeing used in past transformative service research, leading to the first research question. Then the literature on wellbeing trade-offs is examined, with particular emphasis on evidence from the field of economics, followed by a discussion of social dilemmas leading to the second research question. Collectively, these trade-offs theoretically guide the concept of the transformative service paradox.\nTransformative service research - a state of play\nOverview of research design: This paper reports the results of two studies to address the two research questions. The first is a qualitative study that utilized in-depth group interviews with 118 individuals from 45 households to identify the relevant wellbeing dimensions that underpin the use of electricity services and to examine the reasons for wellbeing trade-offs in that context. The second is a behavioral economics laboratory experiment which confirms the transformative service paradox and the reasons for the paradox identified in Study 1. The studies are both situated within the transformative service setting of electricity and are summarized in Table 2.\nStudy 1: Study 1 method\nHypotheses development: Consistent with the aims of the research and the model derived from the findings of Study 1, three hypotheses for testing in Study 2 were developed to confirm the transformative service paradox. These hypotheses are underpinned by social dilemma theory, which posits that individuals are motivated to perform behaviors based upon how much they value outcomes for self in contrast to outcomes for others (see also Table 5).\nH1 Self-dilemma (self vs.
self)\nStudy 2: The purpose of Study 2 was to confirm the transformative service paradox identified in Study 1. This research focused on actual behaviors rather than stated ones; to achieve this, laboratory experimentation was conducted. Laboratory experiments have relatively high rates of replicability, mainly due to their use of substantial financial incentives, avoidance of deception, and methodological transparency (Camerer et al., 2016). By varying only the experimental conditions (independent variables), potential confounds are controlled so that they do not interfere with any effect on the relevant measures (dependent variables) (Elson and Quandt, 2016). This study follows standard experimental economics methodology (Croson, 2005) and involves incentive-compatible tasks, i.e. participants' final lump-sum payment was determined by their performance throughout the experiment. As such, financial wellbeing was one of the key indicators of wellbeing for Study 2. This is appropriate for several reasons. First, a key finding from Study 1 was that financial wellbeing was a key motivating factor for consumers. Second, as indicated in sustainable consumption research, financially incentivizing consumers to reduce energy consumption is more effective as it allows for the dual benefit of saving money in the short term whilst saving the environment in the long term (Mulcahy et al., 2018b; Russell-Bennett et al., 2018), a key premise for Study 2. Further, calls have been made by transformative scholars for a greater understanding of how to enhance consumer financial wellbeing (Bruggen et al., 2017; Losada-Otalora and Alkire, 2019).
Thus, based upon these factors, it was determined that financial wellbeing was an appropriate dimension for focus in Study 2.\nProcedure\nDiscussion: This research has identified that wellbeing trade-offs occur in transformative services and that a paradox results whereby consumers forego a wellbeing dimension (in this case financial) to achieve another wellbeing dimension (physical) in the short term, despite seeking to achieve the foregone dimension in the long term. Consumers are also willing to forego societal wellbeing for individual wellbeing, despite individual wellbeing being dependent on societal wellbeing. Study 1 demonstrated that consumers were aware of the trade-offs and the paradox; however, the temporal state of the wellbeing and the beneficiary were powerful drivers of the trade-off. Further, consumers will often weigh up the benefits for themselves ahead of others before engaging in a wellbeing behavior. These insights into the transformative service paradox advance current understanding of consumers' use of services and performance of behaviors which impact wellbeing.\nTheoretical implications\nConclusion: This research has addressed the important but often overlooked negative unintended consequences of transformative services on consumer wellbeing. The current work has proposed the transformative service paradox framework and aims to stimulate greater discussion and theorizing regarding unintended negative wellbeing consequences of transformative services and services in general. As theorized and empirically demonstrated in this paper, it is possible for positive and negative impacts on different dimensions of wellbeing to occur concurrently. This leads to several implications for theory and practice in considering the multidimensional, temporal, and social (self and other) nature of wellbeing.
Overall, this study contributes new insight into the growing area of TSR, providing an important foundation for future scholarly investigations and considerations for transformative service practitioners.\n", "role": "user" }, { "content": "Purpose: A transformative service aims to improve wellbeing; however, current approaches have an implicit assumption that all wellbeing dimensions are equal and more dimensions led to higher wellbeing. The purpose of this paper is to present evidence for a new framework that identifies the paradox of competing wellbeing dimensions for both the individual and others in society - the transformative service paradox (TSP).\nDesign/methodology/approach: Data is drawn from a mixed-method approach using qualitative (interviews) and quantitative data (lab experiment) in an electricity service context. The first study involves 45 household interviews (n = 118) and deals with the nature of trade-offs at the individual level to establish the concept of the TSP. The second study uses a behavioral economics laboratory experiment (n = 110) to test the self vs. other nature of the trade-off in day-to-day use of electricity.\nFindings: The interviews and experiment identified that temporal (now vs. future) and beneficiary-level factors explain why individuals make wellbeing trade-offs for the transformative service of electricity. The laboratory experiment showed that when the future implication of the trade-off is made salient, consumers are more willing to forego physical wellbeing for environmental wellbeing, whereas when the \"now\" implication is more salient consumers forego financial wellbeing for physical wellbeing.\nOriginality/value: This research introduces the term \"Transformative Service Paradox\" and identifies two factors that explain why consumers make wellbeing trade-offs at the individual level and at the societal level; temporal (now vs. future) and wellbeing beneficiary.\n", "role": "assistant" } ]
Differentiating the effects of entrepreneurs' intelligence and educational attainment on venture outcomes
[ "Cognition", "Human capital" ]
[ { "content": "Summarize the following paper into structured abstract.\n\nIntroduction: Human capital has long been considered an important asset that can lead to personal and organizational success (Becker, 1975; Pfeffer, 1994; Ployhart and Moliterno, 2011). Human capital can be defined as the skills and knowledge that individuals acquire through investments in schooling, on-the-job training and other types of experience (Becker, 1975). Human capital has been shown to be an important resource for success in entrepreneurial firms (e.g. Cooper et al., 1994; Florin et al., 2003; Sexton and Upton, 1985; Unger et al., 2011).\nHypotheses: In this section, hypotheses are developed regarding how entrepreneurs' GMA and educational attainment will affect venture performance (see Figure 1 for a summary of these relationships) and venture survival. The assumption made is that the performance of a venture for someone who is self-employed can be measured by the profits/earnings generated by the venture (Rauch et al., 2000; Unger et al., 2011). These profits can be reinvested back into the business venture or can be distributed to the entrepreneur in two ways. The profits will either be taken out of the business in the form of the salary/earnings of the entrepreneur or be reported as net income of the venture at the end of the year (in which case the self-employed entrepreneur could receive a financial payout from the company). If the entrepreneur chooses to invest money back into the business venture, this would presumably be due to an expectation that the venture will grow and become more profitable, leading to higher earnings in the future. Therefore, a key performance outcome variable measured in this study is the total earnings (in salary and profits) that the self-employed entrepreneur receives from the business venture, which is a direct measure of the performance of the venture (Rauch et al., 2000; Unger et al., 2011). 
A second important measure of an entrepreneur's success discussed below is the survival of the venture (Brinckmann et al., 2010; Headd and Kirchoff, 2009).\nMethod: Sample\nResults: The correlations among study variables are presented in Table I. Similar to prior research (Berry et al., 2006; Judge et al., 2010), GMA and educational attainment were found to be highly correlated (r=0.63). Individuals with higher educational attainment were less likely to have self-employment experience (r=-0.17). Self-employed entrepreneurs in personal services industries had the lowest venture performance (r=-0.28) and had shorter tenure in their ventures (r=-0.19), while entrepreneurs in the professional services industries had the highest income from their ventures (r=0.23). In addition, entrepreneurs with higher GMA were more likely to begin businesses in professional service industries (r=0.32) and less likely to begin businesses in personal service industries (r=-0.12). Consistent with expectations, males were much more likely than females to work in the construction industry (r=0.43), whereas females were much more likely than males to begin businesses in the personal services industry (r=0.45).\nDiscussion: For this sample of self-employed entrepreneurs, when both GMA and educational attainment were examined simultaneously, GMA influenced venture performance directly and indirectly via educational attainment. This indicates the importance of GMA as a context-generic human capital variable for entrepreneurs and supports the view that higher GMA should be viewed as an important resource to enable entrepreneurs to succeed in their ventures (Frese et al., 2007; Hartog et al., 2010). In addition, this study also demonstrates that entrepreneurs with higher GMA were able to obtain more education in subsequent years, and GMA also had an indirect, positive influence on venture performance through this additional educational attainment. 
This provides evidence that GMA is likely a primary antecedent or source of entrepreneurs' human capital acquisition (Ployhart and Moliterno, 2011; Rauch and Rijsdijk, 2013). Therefore, GMA should be more central to the human capital discussion in the entrepreneurship literature and theoretical models that include GMA should be developed.\nConclusion: This study demonstrates the importance of the GMA for entrepreneurs and their venture performance and survival. While there is substantial and well-developed research on the importance of GMA across most occupations and jobs (Hunter, 1980; Schmidt and Hunter, 1998), hopefully this study will encourage additional research examining this important context-generic human capital resource, the interaction of GMA with other context-specific human capital resources, as well as its impact on entrepreneurial outcomes.\n", "role": "user" }, { "content": "Purpose: Intelligence or general mental ability (GMA) is a strong predictor of job performance across most occupations, and educational attainment has been shown to be a predictor of entrepreneurial outcomes. However, there has been little research examining the simultaneous effects of entrepreneurs' GMA and educational attainment on their venture outcomes. The purpose of this paper is to examine the impact of these human capital resources on venture performance and survival.\nDesign/methodology/approach: Using a sample of 234 self-employed entrepreneurs from a longitudinal database, regression analysis was employed to examine the predictors of venture performance. A hazard model was utilized to assess venture survival.\nFindings: Entrepreneurs' intelligence influenced venture performance directly and indirectly via educational attainment. Entrepreneurs with higher GMA were subsequently able to obtain more education, and GMA had an indirect, positive influence on venture performance through this additional educational attainment. 
Findings also demonstrated an inverted-U, curvilinear effect on venture survival for GMA and educational attainment. This indicates that both intelligence and educational attainment should be considered when examining how likely entrepreneurs are to persist or survive in their ventures.\nOriginality/value: Entrepreneurs with higher GMA had ventures that performed better and obtained more education, which influenced venture survival. These findings suggest that entrepreneurs' intelligence is likely to be an important predictor of venture outcomes, as well as a source of entrepreneurs' human capital acquisition. Therefore, GMA should have a more central role in the human capital discussion within the entrepreneurship literature.\n", "role": "assistant" } ]
Mean reversion in per capita GDP of Asian countries: Evidence from a nonlinear panel unit root test
[ "Asian countries", "Heterogeneous panel", "Mean reversion", "Nonlinearity", "Panel unit root" ]
[ { "content": "Summarize the following paper into structured abstract.\n\n1 Introduction: Presence or absence of a unit root or mean reversion in real output is an intensively researched topic in economics, since the results have far-reaching implications for macroeconomic policy making and forecasting purposes. The seminal work in this area is Nelson and Plosser (1982), which found that the US historical time series are nonstationary stochastic processes with no tendency to return to the trend line. Based on the empirical evidence they observed that:\n2 Methodology: Let y\n3 Data and variables: We used annual PCGDP data of 17 Asian countries (namely Bangladesh, Cambodia, China, Hong Kong, India, Indonesia, Japan, Malaysia, Myanmar, Pakistan, Philippines, Singapore, South Korea, Sri Lanka, Taiwan, Thailand and Vietnam) over the period 1950-2009. The data was taken from The Conference Board Total Economy Database accessed from www.conference-board.org/data/economydatabase. In order to analyze issues further we sub-grouped our panel into South Asia (India, Pakistan, Bangladesh and Sri Lanka), East Asia (Indonesia, South Korea, Malaysia, Myanmar, Philippines, Thailand, Vietnam, China and Cambodia) and high income Asian countries (Japan, Hong Kong, Singapore, South Korea, and Taiwan) and analyzed each sub-panel.\n4 Results: The results of the Ucar and Omay (2009) panel nonlinear unit root test and of the Im et al. (2003) linear panel unit root test are given in Table I, followed by the sub-panel results in Table II.\n5 Conclusions: Since the presence of a unit root (i.e. absence of mean reversion) in income is inconsistent with the notion that business cycles are stationary phenomena around a deterministic trend, this paper analyzed the mean reversion properties of PCGDP of 17 Asian countries for the period 1950-2009 in a panel framework. In doing so, we utilized a recently developed nonlinear panel unit root test proposed in Ucar and Omay (2009).
The choice of this test in our application relies on the economic theory that provides reasons for a nonlinear DGP in economic time series due to frictions/distortions in the economy. We carried out the analysis for the full panel of 17 Asian countries and the sub-panels of South Asia, East Asia and the high income Asian countries. In all the cases, we have compared the nonlinear panel unit root results with the linear panel unit root test of Im et al. (2003) and generated p-values for each test with 10,000 bootstrap replications.\n", "role": "user" }, { "content": "Purpose: This study aims to examine the stationarity characteristics of per capita GDP of 17 Asian countries and subpanels for South Asia, East Asia, and high income Asian countries in a nonlinear framework.\nDesign/methodology/approach: The authors employed a recently developed nonlinear panel unit root test suggested by Ucar and Omay in a PESTAR framework for the full panel and the subpanels.\nFindings: The results indicate that per capita GDP for the full panel of Asian countries and the panel of South Asian countries are linear nonstationary, whereas the panels of East Asia and high income developed countries have a nonlinear data generating process and are stationary.\nOriginality/value: The use of a newly developed nonlinear panel unit root test for Asian countries is the main contribution of the study. In that aspect, this is the first study to employ such a test in this area.\n", "role": "assistant" } ]
Design guidelines for female Muslim prayer facilities in public buildings
[ "Architecture", "Customer requirements", "Hygiene", "Facility layout", "Shopping centers", "Prayer facility" ]
[ { "content": "Summarize the following paper into structured abstract.\n\n1. Introduction: Practicing Muslims pray five times daily. Each prayer has a window of time for its performance. To accommodate this function, prayer facilities commonly exist within public buildings in regions of the world with dominantly Muslim populations. Unfortunately, references in architecture design standards do not have information on how to design these facilities. Rather, they address the design of a mosque as a separate building and not the design of a prayer facility that is part of another building (Neufert and Neufert, 2012; Mosque Development Committee, 2013). Hence, they commonly cover aspects such as the types of mosque plans, types of minarets and other typical mosque components. Yet, these aspects are largely irrelevant to the design of a prayer facility that is accommodated in a shopping mall, airport, office, or any other public building. Consequently, there tend to be mistakes in the design of these facilities as designers have no guidelines to refer to. To cover this gap, the author has published several design guidelines that address this problem. One of the guidelines is for an element that is common to both mosques and prayer facilities, namely, the ablution space (Mokhtar, 2003 and Mokhtar, 2006). Further work addresses the ergonomics of designing the ablution unit (Nurtjahy et al., 2013). Mokhtar (2009) addresses many issues related to the design of prayer facilities in public buildings. He also covered the special design issues that apply to such facilities in airports (Mokhtar, 2010). The occasional problem of severe congestion that occurs in some prayer facilities was also studied by Mokhtar and a design solution was proposed (Mokhtar, 2017). Al-Mansoor (2018) developed an accessibility checklist to help accommodate the needs of the physically challenged when they use mosques and prayer facilities. Kakhki et al.
(2017) proposed a formula to calculate the capacity of prayer facilities in train stations.\n2. The guidelines: The guidelines suggested in this paper are based on the following:\n3. Comparing two designs: Figure 14(A) shows a plan for an existing ablution facility in a shopping mall located near Dubai in the UAE. It represents how such a facility is commonly designed in terms of the design program, zoning and circulation. Figure 14(B) is a design modification proposed by the author for the same facility after applying several of the above guidelines. The purpose of using a retrofitting case is to contrast a plan with some errors against a plan that follows the guidelines to a great extent. Certainly, a newly designed facility following the guidelines could be designed much better than Plan (B). Some of the mentioned guidelines are also too detailed to show in this plan.\n4. Conclusion: Muslim female users have the right to enjoy well-designed prayer facilities. Unfortunately, the lack of guidelines makes it difficult for many designers to provide safe, hygienic and comfortable female prayer facilities. This paper introduces some design guidelines to overcome this problem. These cover a range of issues from locating the facility to the design of the ablution unit. A sample design example illustrates how the guidelines achieve a safer, more hygienic and more comfortable design.\n", "role": "user" }, { "content": "Purpose: Practicing Muslims pray five times a day during specified periods. Hence, public buildings in many Muslim-majority countries tend to have prayer facilities. These facilities are typically gender-segregated. Unfortunately, for a long time, the main focus for the design of these facilities has been on the male facilities. As a result, many female users suffer from using facilities that are not safe, hygienic or comfortable. Part of the reason is the lack of guidelines that help designers provide a facility that satisfies the needs of female users.
This paper aims to address this problem with a focus on prayer facilities that are in public buildings and not in mosques.\nDesign/methodology/approach: The paper offers design guidelines that are based on interviews with female users, analysis of well-designed female prayer facilities, observations of users' behavior, feedback from users on previously developed guidelines that are shared with male prayer facilities, and relevant religious rulings.\nFindings: The guidelines address the location of the facility, the zoning and circulation and the design considerations for the prayer space, as well as the reception area and the ablution space. It also recommends a design for an ablution unit that better fits female users. Finally, the paper analyzes an existing design for a female prayer facility in a shopping mall and shows how a change in the design based on this paper's recommended guidelines can achieve better safety, hygiene and comfort for female users.\nOriginality/value: The paper focus is new and should trigger the discussion on this aspect of facilities design where needed.\n", "role": "assistant" } ]
Does big data mean big knowledge? KM perspectives on big data and analytics
[ "Knowledge management", "Knowledge", "Analytics", "Big data", "Contextual knowledge" ]
[ { "content": "Summarize the following paper into structured abstract.\n\nIntroduction: Knowledge management (KM) is now approaching roughly the 25-year mark as a discipline. Although there are instances of academics and practitioners focusing on knowledge from earlier times, we can trace the beginnings of the KM discipline to the early 1990s, solidified with the publishing of the Knowledge Creating Company (Nonaka and Takeuchi, 1995) and firmly established with the publication of the inaugural issue of the Journal of Knowledge Management in 1997.\nWhat does this mean for KM?: One reason we called for this special issue was to find out what the rise and growing influence of big data/analytics means for KM. It has been an interesting special issue to work on. Frankly, although we received a number of good quality and relevant papers, many of these papers made no or very little mention of KM, and some, barely even a mention of knowledge. We worked diligently with the authors of some of the more promising papers to try and get them to explore the relationship of knowledge and KM with big data and/or analytics, which formed the main focus of their papers. It was not always easy as some of the authors seemed to be somewhat blinded by the bright lights of big data/analytics. 
As the interviews with Tom Davenport and Larry Prusak (Pauleen, 2017a) and Dave Snowden (Pauleen, 2017b) in this issue seem to attest, the KM discipline has its work cut out for it if it wants to remain relevant and indeed stay afloat in the big data deluge.\nOur take - a role for big data/analytics in KM: We would like to use this editorial opportunity to make an even stronger case - one that places knowledge and, by association, KM in a fundamental role in the management and application of big data/analytics.\nConclusion and implications: With this critical understanding of the function of knowledge in big data/analytic design, implementation and use, it becomes absolutely clear that KM should assume a leading organizational role in the management and governance of the use of big data/analytics in organizational settings. KM has the theoretical base and the practical experience to decide what data are needed for the organization to run efficiently and effectively, how those data should be analysed to provide information most useful for organizational processes and decision-making and how to develop knowledge-based feedback loops so that changes in data collection and analysis can be made in response to changes in the business environment, both internal and external.\n", "role": "user" }, { "content": "Purpose: This viewpoint study aims to make the case that the field of knowledge management (KM) must respond to the significant changes that big data/analytics is bringing to operationalizing the production of organizational data and information.\nDesign/methodology/approach: This study expresses the opinions of the guest editors of \"Does Big Data Mean Big Knowledge? 
Knowledge Management Perspectives on Big Data and Analytics\".\nFindings: A Big Data/Analytics-Knowledge Management (BDA-KM) model is proposed that illustrates the centrality of knowledge as the guiding principle in the use of big data/analytics in organizations.\nResearch limitations/implications: This is an opinion piece, and the proposed model still needs to be empirically verified.\nPractical implications: It is suggested that academics and practitioners in KM must be capable of controlling the application of big data/analytics, and calls for further research investigating how KM can conceptually and operationally use and integrate big data/analytics to foster organizational knowledge for better decision-making and organizational value creation.\nOriginality/value: The BDA-KM model is one of the early models placing knowledge as the primary consideration in the successful organizational use of big data/analytics.\n", "role": "assistant" } ]
Does poverty lead to crime? Evidence from the United States of America
[ "USA", "Poverty", "ARDL", "Property crime" ]
[ { "content": "Summarize the following paper into structured abstract.\n\n1. Introduction: According to the US Census Bureau's 2015 report, 43.1m Americans lived in poverty. The poverty rate in California is 20.6 percent, which is the highest of any state in the nation. The lowest poverty rates are found in New Hampshire, Vermont, Minnesota and Nebraska, which have between 8.7 and 9.1 percent of their population living in poverty[1]. Based on the Gini coefficient, the USA has the 45th highest level of income inequality among 176 countries (CIA, 2007). In 2010, homicide dropped from the top 15 causes of death in the USA for the first time in 45 years (Daily News, 2011). Property crimes, which comprise burglary, larceny and motor vehicle theft, posted drops between 3 and 9 percent[2] (Figure 1).\n2. Literature review: From the literature point of view, there are two different and complementary approaches to identifying how poverty and inequality impact crime. First, from the sociological aspect, emotional feelings are considered one of the causes that lead people to become criminals. These theories describe how poverty and inequality create social anxiety, tension and strain, which lead people to become violent (Fajnzylber et al., 1998, 2002).\n3. Data and methodology: 3.1 Data\n4. Empirical result: 4.1 Unit root tests\n5. Conclusion and policy implications: Many scholars have empirically addressed the consequences of poverty on crime, but with no clear results and mainly for a mixture of two types of crime: property and violent crime. This study examines the effect of relative poverty on crime from a different angle: property crime. After estimating an ARDL model, our results suggest that poverty is cointegrated with property crime. Though our results are consistent with the previous research works of Madden and Chiu (1998), Fajnzylber et al.
(2002), Lorenzo and Sandra (2008), it is still a debatable issue because some studies could not find a significant connection between poverty and crime (Brush, 2007; Mehanna, 2004; Choe, 2008; Magnus and Matz, 2008). Although the relationship between property crime and unemployment is not significant in our analysis, it is still a notable finding in light of Cohen and Felson (1979), who argue that individuals stay at home to serve as guardians when the unemployment rate is soaring, and that unemployment reduces criminal activity, which ultimately decreases the wealth of crime targets (Cohen et al., 1980).\n", "role": "user" }, { "content": "Purpose: Economic hardship and crime are always a debatable issue in the political economy literature. Some authors argue that poverty leads to crime; others argue the complete opposite. The purpose of this paper is to find out the impact of poverty on crime in the USA.\nDesign/methodology/approach: Using time series data of the USA over the period from 1965 to 2016, this study applies the autoregressive distributed lag approach to identify the effect of poverty on crime.\nFindings: The outcomes confirm a positive co-integrating relationship between poverty and property crime. It can be argued that poverty ultimately leads to property crime in the long run in the USA. However, unemployment and GDP exhibit neither a long-run nor a short-run relationship with property crime, and they are not cointegrated for the calculated period.\nResearch limitations/implications: The subject of this paper helps to explain and analyze the nexus between poverty and crime in the USA.\nPractical implications: Government and policymakers should focus more on poverty rather than unemployment alone to control property crime.\nOriginality/value: This study attempts to identify the consequences of economic hardship and poverty on crime in an advanced economy like the USA.\n", "role": "assistant" } ]
Emotion regulation in service encounters: are customer displays real?
[ "Customer emotion regulation", "Emotion regulation matrix", "Emotional displays", "Service encounters" ]
[ { "content": "Summarize the following paper into structured abstract.\n\n1. Introduction: Customers frequently experience positive or negative emotions during service interactions that are often followed by a choice to reveal, conceal or regulate these emotions. This decision is influenced by the circumstances in which the emotions arose and their expected consequences. Emotion regulation, which refers to people's attempts to influence their own emotions and observable emotional displays in a context-sensitive manner, is a hallmark of successful human functioning (Giuliani et al., 2008), as it helps avoid unwanted or intrusive emotional expressions that hinder communication and goal achievement. While employee emotion regulation, known as emotional labour, has been studied extensively (e.g. Ashforth and Humphrey, 1993; Brotheridge and Grandey, 2002; Groth et al., 2009), customer emotion regulation has received little attention. This is surprising because service interactions are two-way: customers' emotions and behaviours can influence employees' behaviour. Thus, customers may also need to regulate their emotions (Gabriel and Diefendorff, 2015; Medler-Liraz and Yagil, 2013). The few customer emotion regulation studies have used customers' surface or deep acting (e.g. Alnawas and Altarifi, 2016; Balaji et al., 2017; Medler-Liraz and Yagil, 2013) to examine their impacts on satisfaction, relationship quality, word-of-mouth and repurchase intentions. However, a matrix model suggested recently by Gross (2013) offers a broader approach and an alternative way to examine emotion regulation (by investigating emotion regulation behaviours). The model's four-quadrant \"emotion regulation matrix\" allows for the playing up and downplaying of positive or negative emotions. Each quadrant is termed a \"facet\", while the behaviours (e.g. shouting, smiling) are termed customer emotion regulation behaviours (CEREBs). 
This study investigated emotion regulation through the expressive behaviours found in the four facets, adding to our understanding of the phenomenon of customers consciously displaying regulated emotions to employees, shifting the focus from employees to customers. It is reasonable to expect that not all facets are applicable to employees (e.g. playing up negative emotions may not be), but all are likely relevant to customers. This is important, as identifying and recognising customer expressive behaviours helps employees meet customers' needs and build rapport (Gremler and Gwinner, 2008; Puccinelli et al., 2010). Thus, this study looked at what customers did, rather than the process by which they did it (i.e. using deep or surface acting strategies). The study also examined the factors that motivated customers' engagement in these facets. While some such motives have been discussed in psychology (e.g. Tamir, 2016), they have not been examined in customer-employee service encounter contexts. Unlike employees, customers are not constrained by display rules. As a result, the motives that drive customers to engage in emotion regulation may be different from those that drive employees. In addition, the study explored customers' motives to engage in emotion regulation and examined some situational variables that might influence CEREBs. While situational factors have been identified from an employee's viewpoint, there is little evidence about those that might affect customers' emotion regulation. An understanding of CEREBs across the four facets should provide greater information about the customer emotion regulation aspects that marketers can use to develop more effective service interaction strategies. 
Thus, the objectives of the present study were twofold: (1) to explore the range of emotion regulation behaviours customers use during service encounters and identify the behaviours that reflect each facet of the emotion regulation matrix; and (2) to identify the motivational factors and service-related situational conditions that influence the likelihood of customers playing up and downplaying positive or negative emotions during service encounters. In sum, the overall research objective was to investigate CEREBs in face-to-face encounters and determine the factors that impact on customers' engagement in specific facets. The next section discusses emotion regulation and its application in service contexts so as to provide an understanding of its relevance to customers' service experiences. The approach used is then presented, as are the findings. Finally, some implications, limitations and future research opportunities are outlined.\n2. Literature review: This literature review addresses the development of emotion regulation research, including the core of this study (the emotion regulation matrix). Subsequently, factors that might influence emotion regulation are discussed. These factors have not been investigated previously with respect to the emotion regulation matrix. 2.1 Defining emotion regulation\n3. Method: The study used focus groups and the critical incident technique (CIT) to develop the key themes and core concepts associated with CEREBs, as Mazzarol et al. (2007) found this combination generated richer data. A qualitative approach was seen as appropriate to determine the expressive behaviours customers displayed in service encounters when they engaged in each facet in the emotion regulation matrix, as empirical evidence about such behaviours is scarce. Focus groups were used to discuss these ideas, so as to flesh out the behaviours associated with the emotion regulation matrix's facets. 
Focus groups are useful in improving participants' thought patterns by awakening memories about what they have done during past events. They may also lead other participants to recall similar events and suggest ideas that would not be raised in one-on-one interviews. As a result, considerable depth of information about participants' behaviours in each facet was accessed, as well as the motives and situations that drove them to engage in these facets. The CIT was also seen as appropriate for exploring customers' actions and behaviours in downplaying or playing up emotions, as it has been used to uncover behaviours that contribute to the success or failure of interactions (Bitner et al., 1994). The retrospective, self-report nature of the CIT method requires respondents to provide a detailed description of what happened during an incident and allows a longer time for respondents to reflect on and describe (write) their experience in the form of a story. This enables respondents to clarify feelings and meanings they might be unable to articulate otherwise (Sharoff, 2008). While studies have demonstrated the CIT's usefulness as an independent method (Bitner et al., 1994; Gremler, 2004), it was used here as a supplementary method to enrich the focus groups' findings. The focus groups provided breadth of data, while the CITs provided depth, so the methods were complementary. The focus groups served as the starting point in gathering evidence, including the motives that led to and the actions used when playing up or downplaying emotions, while the CIT responses were used to capture the depth of such events, through reports of incidents in each of the four matrix facets. Thus, the two methods were used in parallel, as suggested by Mazzarol et al. (2007), whose research objectives (investigating a concept in depth and identifying related antecedents and situational conditions) mirrored those of this study. 3.1 The focus groups\n4. Findings: 4.1 The CEREBs matrix\n5. 
Discussion: 5.1 Theoretical implications\n", "role": "user" }, { "content": "Purpose: Despite recognition of the importance of emotions and emotion regulation in service encounters, emotion regulation has generally been studied from an employee perspective. This study investigated customer emotion regulation behaviours (CEREBs) in face-to-face service encounters, arguing for a more nuanced approach through an emotion regulation matrix representing the playing up and downplaying of positive or negative emotions. Motivational factors and service-related situational conditions that influence the likelihood of emotion regulation were also examined.\nDesign/methodology/approach: Four focus groups and the critical incident technique (CIT) were used to obtain data from people who had interacted with service employees within the previous six months.\nFindings: There was support for emotion regulation in the four facets of the emotion regulation matrix. Five CEREB dimensions, including verbal behaviours and facial expressions, were evident. Motivational factors and situational conditions that impacted on customer emotion regulation in service encounters were also identified.\nResearch limitations/implications: The findings were based on two qualitative methods. A quantitative approach should be used to further validate the suggested framework.\nOriginality/value: Most research on emotion regulation has focused on employees. We examined the phenomenon from a customer viewpoint and in a service encounter context. As customers are not bound by employment rules and conventions, a wider range of emotion regulation behaviours was found. The study used the four-faceted emotion regulation matrix to investigate this, developing a conceptual framework that provides a foundation for future research.\n", "role": "assistant" } ]
Can family characteristics influence the future labor situation of children? Evidence for Spain
[ "Spain", "Unemployment", "Intergenerational transmission", "Labour market", "Household composition", "D10", "E24", "I32" ]
[ { "content": "Summarize the following paper into structured abstract.\n\nIntroduction: Major social changes in the institution of the family in Western countries have resulted in a process of separation in the household, with rising divorce rates and growing numbers of single-parent families (Cherlin, 2002; Manning et al., 2014)[1]. Figure 1 shows the evolution of the proportion of single-family households and married and unmarried couples in recent years in Spain[2]. Although living with a partner still appears to be the favored state for Spanish individuals, the number of single-family households has increased during the period of study, and does not appear to be slowing down[3]. The relationship between the composition of the household and economic well-being is obvious, since poverty rates vary dramatically, depending on family characteristics. In Spain, 42.2 percent of single families were at risk of poverty in 2016, while this percentage was just 25.5 percent in the case of married couples with children (INE, 2011/2016). These changes not only affect couples' well-being, but may also have implications for the well-being of their children, who receive fewer parental inputs than their counterparts who live with both parents at home (Amato, 2005; McLanahan and Sandefur, 1994; Mencarini et al., 2017). Recent studies have focused on the importance of fathers, who are less likely to be involved in their children's lives when they are divorced or not married (Hofferth, 2006; Cabrera and Tamis-LeMonda, 2014). Moreover, poverty entails challenges and situations that require a greater effort with only one available parent (Oliker, 1995; Edin and Lein, 1997). 
Thus, it is not beyond the bounds of possibility that household composition affects children's economic well-being not only during their childhood, but also in their adulthood.\nTheoretical and institutional background: Different researchers use different definitions of family support, but it is often used to refer to emotional, practical, socializing, advice and financial support (Fingerman et al., 2009; Vaux, 1988; Vaux and Harrison, 1985). This paper adopts this view of support. Given that the absence of one of the parents entails receiving fewer parental inputs, and that even when both parents are present, financial difficulties imply receiving less support, our goal is to analyze whether this lack of support during childhood has implications for future labor market outcomes.\nData: We use data from the Survey of Living Conditions (SLC) provided by the Spanish Statistical Institute (INE) for the year 2011, which is the latest year providing information about family characteristics when individuals were teenagers. The SLC provides rich information that allows us to identify individual work status, as well as the specific characteristics of each household during individuals' adolescence, such as the composition of the household and the labor status of the parents. In the first analysis, our main explanatory variable is measured as both parents present in the household during individuals' adolescence. In a second analysis, the variable is defined as the unemployment status of the parents during the individual's adolescence. In this setting, our goal is to study whether individual behaviors in labor markets may be determined, although not exclusively, by the family support received during their teenage years. Our main sample contains 13,489 observations of Spanish individuals aged 26-60 years old, who report information about their household composition when they were teenagers and who are in the labor force. 
A reduced sample of 3,150 observations was used in our second analysis. The variation in the sample size is due to the fact that we restrict our sample to those individuals whose parents were in the labor force.\nEmpirical strategy: Our goal is to analyze whether family support, measured through the presence of both parents in the household and parents' labor status when individuals were teenagers, can influence their current situation as adults in the labor market. Thus, if family characteristics do not play a role here, the presence of both parents in the household during adolescence or the parents' unemployment status should have no impact on individuals' current unemployment situation as adults. On the other hand, if family support does play a role in labor arrangements, we would expect to detect a relationship between the behavior of the respondents and that of their parents during their teenage years. To test this issue, we consider the Quantity-Quality methodology of Becker-Lewis (Becker and Lewis, 1973) and estimate the following probit model: (1)\nResults: The effect of both parents present during adolescence on the probability of being unemployed\nConclusions: The aim of this paper is to show how recent changes experienced by Spanish households can affect the future unemployment situation of children. The dramatic consequences for the Spanish labor market after the recent economic crisis show the importance of studying patterns of unemployment, and how they can affect subsequent generations. Additionally, it is increasingly common to find single-parent or divorced families, and prior research has found negative consequences of not living with both parents at home for children's well-being. In our study, we focus on children's future well-being. We find that individuals' success in labor markets may be determined by their family characteristics when they were teenagers. 
Specifically, our results show that those individuals living with their parents during childhood are less likely to become unemployed in the future, and those whose mother was unemployed during their adolescence are more likely to be unemployed. We make use of the attachment theory postulated by Bowlby (1969, 1980) to explain the channel through which adolescents' outcomes in the labor market may be determined by the family support received in the past.\n", "role": "user" }, { "content": "Purpose: The purpose of this paper is to analyze whether family support, measured through the labor status of parents and the presence of both parents in the household during adolescence, may be an important determinant of unemployment in Spain.\nDesign/methodology/approach: To address this issue, we follow the Quantity-Quality model of Becker-Lewis (Becker and Lewis, 1973), using data from the Survey of Living Conditions (2011).\nFindings: First, the results show that individuals living with both parents at home during their teenage years are less likely to be unemployed in the future. Second, the authors find evidence of the intergenerational transmission of unemployment outcomes, and that the unemployment status of the mother is strongly transferred to the child. Additionally, the authors extend this work to an analysis of other labor characteristics, finding that family support is an important determinant of self-employment and temporary employment. The findings are robust to controls for observable and unobservable characteristics by region, and to the use of different subsamples.\nOriginality/value: This study can be considered first evidence of the effect of family support during adolescence on the Spanish labor market.\n", "role": "assistant" } ]
The link between perceived high-performance work practices, employee attitudes and service quality: The mediating and moderating role of trust
[ "Greece", "Trust", "Banks", "High-performance work systems", "HPWS", "Employee outcomes" ]
[ { "content": "Summarize the following paper into structured abstract.\n\nIntroduction: Over the last two decades, there has been a vast amount of research linking high-performance work systems (HPWS) with various firm-level performance outcomes (e.g. Huselid, 1995) across different industries and regions (e.g. Messersmith and Guthrie, 2010). Looking at the past two years, HPWS research still lies at the forefront of the strategic human resource management (SHRM) literature (e.g. Chang, 2015; Cooke et al., 2016; Van de Voorde et al., 2016; Zhang et al., 2016). On the other hand, despite the positive effects attributed to HPWS, there is still a gap in the literature with regard to the mechanism through which HRM practices influence performance (Takeuchi et al., 2007, p. 1069). In more detail, researchers suggest that there is a need to understand not only the \"what\" and the \"why\" of the impact of HRM, but also the \"how,\" which is known as \"the black box\" problem (Messersmith et al., 2011). The latter is considered one of the key issues requiring further attention in the field of HRM. Thus, researchers call for more theory and research on the intervening mechanisms that may contribute to explaining the impact of HRM practices on organizational outcomes (Innocenti et al., 2011, p. 304; see also Van de Voorde and Beijer, 2015, p. 62).\nTheory and conceptual framework: The relationship between HPWS and trust\nMethod: Sample and procedure\nMeasures: HPWS\nResults: Table II shows the standard deviations, reliabilities (in parentheses) and correlations among the study variables.\nDiscussion and conclusions: The present study contributes to the growing HRM literature by shedding some additional light on the underlying mechanisms of the \"black box\" workings. Specifically, by following the empirical study of Kalleberg et al. 
(2004) as a theoretical framework and focusing on the Greek banking sector, this study examines the effects of employees' perceptions of HPWS on service quality through the mediating roles of trust and employee attitudes. Taking into consideration the scarcity of studies examining the role of trust in the broader HPWS - organizational performance relationship, the present findings provide useful insights at both a theoretical and a practical level.\nImplications for managers and practitioners: The findings of the present study provide important practical implications for organizations, managers, and practitioners.\nLimitations: This study has some limitations. First of all, as in all cross-sectional studies, although we tested for CMV and found none, there is the potential that CMV did influence our results. Similarly, the cross-sectional nature of the study cannot rule out the issue of reverse causality. Taking the latter into consideration, high firm performance outcomes usually translate into organizational health, which might have a positive effect on employee satisfaction and commitment, and thus on employment security (Paauwe and Boselie, 2005). Hence, satisfied and more committed employees might indicate greater levels of HPWS in place. Thus, a longitudinal study would be preferable not only to rule out CMV, but also to uncover the dynamic influence of HPWS (Takeuchi et al., 2007). Indeed, longitudinal studies are in a better position to make causal statements and provide a stronger test of the hypothesized relationships. Second, in this study we examined only one level of the workforce - front-line employees. However, the differing perceptions of managers and employees should be taken into serious consideration, especially in organizations that are characterized as multi-level systems. 
For instance, employees might have different perceptions of the nature and extent of the HR practices used, whereas managers' perceptions might not bear any relationship to what actually occurs (Ramsay et al., 2000). Moreover, in complex organizations, there are potentially problems of agreement within the management hierarchy and between management and operating employees. Thus, the adoption of a multi-level approach that uses multiple raters of HRM practices to elucidate the perspectives of managers and employees and the roles they play in the use of HRM is required (Ang et al., 2013, p. 3089). Third, although data were collected across four different bank branches, we chose not to perform a multi-group analysis. Specifically, each bank branch provided 80 responses on average. Hence, based on the small sample size per bank, we believe that a multi-group analysis would not add value to the study's conclusions. Fourth, for this study we chose to use the HRM practices used by Kalleberg et al. (2004) as representative of HPWS. However, some of these practices were measured via dichotomous indicators. Thus, there is the possibility that dichotomous indicators may not sufficiently capture the existence of a specific HR practice in place. Finally, the analysis was conducted using data from Greek bank employees. Taking this into consideration, although our results are consistent with most of the international literature examining the impact of HPWS on employee outcomes (e.g. Macky and Boxall, 2007; Paauwe and Boselie, 2005), the generalizability of the present research might be limited to Greek banks that incorporate similar HR management practices. Indeed, as Raineri (2016, p. 24) underscores, HR practices might be sensitive to cultural or legal particularities. 
Hence, the further the HPWS literature moves away from a specific context, the more HPWS practices need to account for legal, market, and socio-cultural variations.\nConclusion: Based on a sample of front-line employees working in the Greek banking sector, this study examined the mediating and moderating role of trust between employees and management in developing and maintaining work-related relationships that ultimately lead to enhanced productivity, as measured by employees' perceived service quality. Taking into consideration the Greek context in which this study took place, the banking sector, which is generally under-examined across the HPWS literature, and the limited investigation of the mediating and moderating role of \"trust\" in the HPWS - service quality relationship, it could be argued that this specific research enriches our understanding of the \"black box\" phenomenon that links HPWS to individual performance.\n", "role": "user" }, { "content": "Purpose: The purpose of this paper is to focus on the Greek banking sector and examine the indirect effects of high-performance work systems (HPWS) on service quality. Specifically, this study examines the effects of employees' perceptions of HPWS on their trust toward their managers, as well as on service quality, through the mediating role of employee outcomes (measured by job satisfaction and affective commitment). In addition, trust is also tested for its role as a potential mediator and moderator in the relationship between HPWS and employee outcomes.\nDesign/methodology/approach: Partial least squares structural equation modeling was used on a sample of 350 front-line employees working in the Greek banking sector.\nFindings: The findings showed that employee outcomes mediated the relationship between HPWS and service quality. On the other hand, although trust mediated the relationship between HPWS and employee outcomes, no support was found for its moderating effect. 
Finally, although not included in the initial analysis, trust was found to play a mediating and moderating role in the relationship between HPWS and service quality.\nPractical implications: This study highlights the vital role that a \"trusting\" work environment plays in shaping employee attitudes and outcomes. As this study shows, employees' positive behaviors, along with their willingness to accept and support organizational goals, influence their level of productivity.\nOriginality/value: This study sheds light on the mediating and moderating role of trust in the relationship between HPWS, employee outcomes, and service quality. Finally, implications are drawn for organizations, managers, and practitioners.\n", "role": "assistant" } ]
Fresh thinking in services marketing: contemporary, cutting-edge and creative thoughts
[ "Services marketing", "Theory development", "Service research" ]
[ { "content": "Summarize the following paper into structured abstract.\n\nWhy are Fresh Ideas Important?: What do the following ideas have in common?\nEnhancing theory development in service research: We start with five essays resulting from an international network event that took place at Karlstad University in September 2014 on the theme of \"Enhancing Theory Development in Service Research\". The essay by Gustafsson et al., which should be read first, provides the aims and rationale for the event: a joint commitment to focus on revitalizing the discussion on relevance and rigor as a path forward for service research, and the inclusion of a model of rigorous and relevant research. McColl-Kennedy et al. offer fresh perspectives on the extremely topical theme of customer experiences. Not only that, but the authors have also set an imaginative research agenda for future research on customer experience, which sets the scene for the design of innovative research on this theme. Innovation, in the form of service innovation (a current hot topic in service(s) research), is the theme of the essay by Witell et al. They introduce a model that highlights the need for a change in focus of research on service innovation; in particular, to encourage research that addresses the level at which the innovation is directed, and to include failures, as well as successes. Wuenderlich et al. take us into the fascinating, and increasingly important, domain of \"smart services\": technology-enabled services that allow \"for real-time data collection, continuous communication, and interactive feedbacks\" in both business-to-business (B2B) and business-to-consumer (B2C) contexts. The essay offers a wealth of challenging research questions which should inspire fresh thinking in the future of this research stream. Finally, the essay by Aksoy et al. puts forward a powerful argument for the use of relative, rather than absolute, measures in service(s)-related research. 
The authors provide an extremely helpful table to enable researchers to adopt a relative approach to measures of concepts, such as overall satisfaction, switching barriers, service employees' perceptions and attitudes, and many more. The five essays, with contributions from eminent service researchers, provide excellent foundations for the cutting-edge and creative thoughts of the future.\nLogics behind service research: Over the last 10 years, service(s) marketing researchers have been inspired to think more deeply about the fundamentals and potential of their discipline as a result of debates on logics - service-dominant logic, service logic and customer-dominant logic. Each of these perspectives is developing, and we are delighted to be able to offer the latest thinking on all three in the same journal issue. Adopting a service-dominant logic perspective, Akaka and Vargo apply a service ecosystems lens to service context and service experiences, and provide an extended context of service which directly acknowledges earlier work on service encounters and servicescapes in the process of developing dynamic frameworks that represent today's complexities. Fitzpatrick et al., adopting a service logic perspective, introduce a \"Relationality Framework\" to delve deeper into a theoretical and practical analysis of the value-creation process. Heinonen and Strandvik outline the theoretical and practical implications of adopting a customer-dominant logic approach on service and offer five essential foundations for customer-dominant logic. We also have an added bonus. Cheung and McColl-Kennedy put forward the case for the consideration of a new transformative service logic, based on a netnographic study of service following natural disasters. Not only do these papers offer the latest thinking and contrasts between the logics, but they are also a repository for essential literature in these areas. 
Each of the logics embraces the notion of customer participation in service, a topic which is becoming increasingly important in its own right.\nCustomer participation in service: There are three papers on the theme of customer participation in service. Dong explores different forms of customer participation - those involved with designing a service, and those involved with producing a service - taking account of the mediating effect of customer expectation. The papers by Engstrom and Elg, and by Schuster, Proudfoot and Drennan, each examine customer (patient) participation in healthcare services. Engstrom and Elg, through diary records and interviews exploring lung cancer patient motivation, provide a typology of different styles of patient participation. Schuster, Proudfoot and Drennan, in the context of mental health services, focus on the extent of customer loyalty associated with technology-based (mobile phone and Internet) versions of the service. Although the focus of these three papers is on customer participation, they should also appeal to researchers involved with value co-creation, customer well-being and technology in service.\nCustomer engagement: According to Brodie et al. (2011, p. 253), customer engagement \"reflects customers' interactive, co-creative experiences with other stakeholders in focal, networked service relationships\". The topic of customer engagement is of contemporary interest to both service(s) marketing researchers and practitioners. Two papers in this issue provide fresh insights on strategic issues involved with customer engagement. Practitioners have considered gamification as a means to engage customers with a brand beyond transaction activities. Harwood and Garry offer novel empirical research on the effectiveness of virtual brand gamification for engaging customers. Companies wishing to implement corporate social responsibility (CSR) programs see them in the context of engaging customers, thereby enhancing customer loyalty. 
O'Brien, Wade and Soutar argue a strong case for organizations to devote greater time and attention to identifying which CSR issues to pursue to improve customer motivation for engagement with CSR-related strategies. Researchers of customer engagement will find the extensive lists of references for these two papers to be a valuable resource.\nService employee contributions: Service(s) marketing research has regularly contributed to the understanding of the role of the service employee in the provision of service. Four papers in this issue add to the understanding. Grace and Lo Iacono focus attention on the value co-creation process from the perspective of the internal customer, and put forward a well-developed conceptual model that identifies four broad dimensions - monetary, functional, emotional and social - that encapsulate the value perceptions of service employees. Huang and Rundle-Thiele derive a test to measure internal marketing practice based on a three-dimensional concept consisting of internal market research, training and communication activities. The service-profit chain is a well-established general framework linking employee satisfaction and loyalty to customer satisfaction and loyalty. Myrden and Kelloway put forward the case for a dynamic, day-by-day adaptation of the service-profit chain to define better the relationship between service employee attitude and performance, through which they identify the importance of leadership training. Front-line employees regularly receive positive and negative customer feedback in many forms. Conventional wisdom suggests that positive customer feedback has a positive effect on the employee. Nasr, Burton and Gruber challenge this view, and, based on a qualitative enquiry, demonstrate that positive customer feedback can have a negative effect on front-line employees. 
They put forward a model which identifies the types of negative impacts on employees of positive customer feedback, and outline the associated managerial challenges.\nThinking outside the box: The last three papers provide a fitting finale to the special issue, through demonstrating fresh \"outside-the-box\" thinking which makes services marketing so interesting and enjoyable. Ashman and Patterson work on the premise that research dissemination is a service, and that service researchers should practice what they preach. They provide a service arts perspective to the issue of making statistical models (especially those derived through structural equation modelling) more accessible to a wider audience. This particular service, they argue, can be achieved through the use of infographics, a device commonly used in many other fields. Gambling can be regarded in a negative way, with its association with addiction. Drawing on a study of Chinese tourists in Macau, Rosenbaum and Wong present a fascinating account of how gambling can be healthy. The results of their empirical work provide encouragement for researchers to seek out the extent to which consumers acquire restorative benefits in different consumption settings, even those regarded by some as representing \"sinful\" services. Metaphors, for example, service factory and service theatre, have a privileged place in services marketing research, providing both directions and inspiration. The honeybee metaphor, advocated by Brozovic, Ravald and Nordin, has the potential to be equally influential, given the current research concentration on service systems and service ecosystems. 
The notions of honeybees as part-time marketers, the honeybee hive as service provider, and flowers as customers, should serve as catalysts for reading this far-reaching paper.\nOther fresh ideas emerging in the services literature: This special issue does not claim to be the definitive work on the fresh ideas currently available in services marketing. There are many other ideas emerging that need debate, investigation and collaboration. Some of these additional areas offering fresh ideas for services are:\nConclusion: This summary of the papers does not do justice to all the fresh ideas which are being put forward by the authors, but we hope it succeeds in whetting appetites. We have certainly been inspired by the papers, and they have made our role as guest editors of the special issue a truly enjoyable experience. So let us advance our field of service marketing by questioning established logics and expanding the boundaries of what is known through the generation of fresh ideas.\n", "role": "user" }, { "content": "Purpose: The purpose of this paper is to highlight and promote fresh thinking in services marketing research.\nDesign/methodology/approach: The topic of the special issue was deliberately chosen to encourage fresh ideas and concepts that will move the discipline forward. The accepted papers have been categorised for ease and convenience of reading by scholars and practitioners, with a short commentary on each category.\nFindings: There is a wealth of forward-thinking by service(s) marketing researchers that bodes well for the future of the sub-discipline.\nResearch limitations/implications: The special issue does not address fresh thinking in all areas of services marketing research. Other potential areas for fresh thinking are identified.\nOriginality/value: New thinking in a scholarly field is necessary to propel the discipline forward.\n", "role": "assistant" } ]
Comparative, cross-cultural, and cross-national research: A comment on good and bad practice
[ "Research", "Research methods", "Cross‐cultural studies", "Marketing" ]
[ { "content": "Summarize the following paper into structured abstract.\n\n__NO_TITLE__: The International Marketing Review (IMR) is a journal that encourages submissions of research that extend theory and practice in international marketing, and one kind of submission that we seek is empirical international comparative research. On this front, IMR welcomes and receives a high number of manuscript submissions in which the authors undertake some kind of international comparative examination, and this is reflected in the papers accepted and published in IMR. This places IMR at the forefront of marketing and business journals publishing research of this kind. Engelen and Brettel (2010), for instance, undertake a content analysis of comparative cross-cultural marketing research articles published in 14 marketing and business journals between 1990 and 2008. The authors identify IMR as one of the three journals (together with Journal of Business Research and Journal of International Marketing) that, combined, have published more than half of the cross-cultural marketing studies they identify in their sample.\nProblems with cross-cultural/cross-national research: There is often a common theme underpinning international comparative studies submitted to IMR that precludes them from publication. These papers are often desk rejected. The core issue is that:*the research objectives of the studies revolve around assessing how nation-level variables of interest (e.g. 
aspects of national culture such as individualism, collectivism, and so on, or economic factors such as level of economic development) differentially drive certain beliefs, attitudes, and behaviors in customers or individuals/groups in organizations; and yet*the researchers only test their theories using samples from two or three countries.These researchers often draw conclusions from their studies regarding the role that nation-level variables (such as culture) may have as antecedents to the beliefs, attitudes, and/or behaviors under scrutiny.Yet, as Franke and Richey (2010, p. 1 and 2) demonstrate, samples of two or three countries are not adequate for answering questions of this kind: \"Comparing small numbers of countries will actually often fail to reflect a trend that applies to countries in general, even when the overall trend is strong, and may falsely suggest a positive or negative trend, even when the relationship between variables is weak\".In essence, then, authors submitting to IMR international comparative studies with small numbers of countries sampled are seeking to develop and test some theory of how national-level factors differentially drive individuals and organizations in different countries, yet only provide two or three data points on which to test the theory. Studies of this kind are more akin to case studies, or exploratory research: they are rich and fruitful approaches that can dig into issues of interest, and help scholars develop new theories and research agendas. The use of small numbers of countries, however, does preclude generalizations, and so theory testing is not appropriate.\nInappropriate solutions: In some papers, researchers try to get around the issue of having a small sample of countries by framing the research objectives very narrowly. 
For instance, the paper may start out, in its positioning, by claiming that potential differences between two countries (Countries A and B) on some variable of interest (VOI) could be due to culture. A model containing a hypothesis is then developed. The formal hypothesis is framed very specifically to refer only to the Countries A and B; for instance, the authors may say:H1. Country A will have a higher level of the VOI than Country B.Of course, underpinning the logic of this formal hypothesis is an implicit and unspecified hypothesis that some cultural dimension causes variance on the VOI. Herein lies the problem for the authors: although they have tried to avoid developing a general hypothesis about how culture shapes the VOI, since they know that to do so would require data from more than two countries, they cannot get away from doing so - at least implicitly. And of course, all implicit hypotheses and assumptions should be explained and elaborated on when developing logical arguments.Later, the authors may conclude that H1 is supported (or rejected), based on their empirical analysis. By implication, the authors are also suggesting that the unspecified hypothesis propping up H1 (that \"some cultural dimension causes variance on the VOI\") is also supported (or rejected). The authors may even be explicit on this front, and suggest that their findings are generalizable, and may be extrapolated to other cultures. Here, again, the problem is that data from just two countries cannot provide strong support for the implicit hypothesis, and so implicit or explicit generalizations are not appropriate.When writing international comparative research papers, IMR authors should be wary of their own work if they find that they are discussing and justifying methodological issues (such as the countries to be sampled) in the introductory section, and if a hypothesis is built around predicting how specific countries (that are later sampled) will differ with respect to a VOI. 
Usually, this indicates that the authors are trying to frame the research objectives narrowly, in an attempt to avoid having to face the fact that they are developing a general theory of how a national-level variable (such as culture) affects their VOI.\nAppropriate solutions: Of course, there are situations where it is entirely appropriate for a research study to begin by specifying that it will focus on just two countries, say. As noted above, research undertaken in a small number of countries can provide rich insights into nation-level factors that are not well understood. Schindler's (2009) paper, in which he builds an exploratory model and predicts how price endings will vary across the USA and Japan, is one example. The author's work is exploratory, and our knowledge of how price endings may differentially be understood by consumers, and used by managers, is only just emerging. Empirical evidence regarding the prevalence of different price endings across two countries is an important first step in this respect. That said, it would be wrong to generalize from the two countries studied: Schindler is very careful not to do so and is at pains to point out that his study is only a first step, one that may eventually help understand price-ending effects across cultures. In essence, Schindler's work is a case study that can guide future empirical work.However, not all research can be framed as exploratory. Research objectives often demand descriptive research, and this requires large (enough) samples. In this case, the most obvious solution for authors is to ensure that they have a sample that is big enough to draw meaningful, generalizable, conclusions. On this front, Franke and Richey (2010, p. 16) suggest that:[...] some confidence can be placed in results that indicate strong relationships between variables based on seven to ten or more countries. 
Samples of 20 countries may be sufficient for reliable detection of above-average effects, and 50 or more may be needed for safe generalizations about the moderate effects that appear to be typical of relationships between national-level variables.Clearly, this requires more work on the part of the researcher: data collection in two countries is easier than data collection in 20 countries. Furthermore, there are the ensuing issues that may arise regarding measurement equivalence. It is more challenging to diagnose measurement variance and to establish measurement invariance across larger numbers of countries. Additionally, the more complex the model (e.g. if the model predicts that culture has a non-linear impact on the VOI), the greater the number of sampled countries needed. The choice of countries sampled is also an issue, since careful selection of countries is needed to ensure that the full ranges of the nation-level variables of interest are covered.For researchers who have data from two (or at any rate, a small number of) countries, but would struggle to get data from a sufficient number of countries to test their underlying theory of a nation-level variable's impact on a VOI, another option may be more pragmatic. This solution involves removing the nation-level variable from the conceptual model. The result would be that the researcher develops, say, a culture-free theoretical model, and as a result, the research objectives would no longer require the generation of data from a large number of countries - indeed, data would no longer be required from more than one country. The theoretical model must, of course, be interesting in its own right and must stand up as a significant contribution to international marketing knowledge. Having data from more than one country to test the model is not necessary - but it is a bonus, because it provides information on the stability of the model across national samples. 
Note, since culture (or whatever nation-level variable is of interest) is not explicitly included in the researcher's theoretical model, the researcher would not need to specify whether differences across the cultures would be expected in the magnitude of the VOI, or in the magnitudes of relationships between VOIs across the samples. Accordingly, any differences that are observed across the national samples would be discussed in the findings and discussions sections of the paper only. It is possible that, here, aspects of culture (or other nation-level variables) may be identified as potential reasons why the observed differences occurred and that this might point to future research needs (e.g. to examine such issues in more depth). If no differences are observed across the samples, then this just provides empirical support for the robustness of the theory originally proposed.Research that uses data from multiple countries is to be encouraged. At IMR, we welcome submissions of this kind. Nevertheless, it is important that researchers carefully think about how they present such research, so that they are not using data from a small number of countries to generalize inappropriately. 
With careful planning and positioning, international comparative marketing research studies can play an important role in driving new international marketing research agendas, testing theories for their cross-national stability and further developing our understanding of international marketing across markets and cultures.\n", "role": "user" }, { "content": "Purpose: - The paper seeks to provide guidance for those who have data from multiple countries and are considering writing a comparative or cross-cultural/national research paper.\nDesign/methodology/approach: - The approach taken is an essay format.\nFindings: - International comparative marketing research studies are often rejected from the International Marketing Review because authors develop theory for which they have inappropriate data.\nResearch limitations/implications: - Research models need to be commensurate with the data available. Models that claim to assess how nation-level variables of interest differentially drive beliefs, attitudes, and behaviors across countries, and yet only test the models using samples from two or three countries are not appropriate. Appropriate solutions for researchers seeking to undertake international comparative research are suggested.\nOriginality/value: - The paper provides researchers with insights into successful strategies for publishing international comparative research.\n", "role": "assistant" } ]
Examining "peer-to-peer" (P2P) systems as consumer-to-consumer (C2C) exchange
[ "Consumers", "Internet", "Resource sharing", "Buyer‐seller relationships", "Marketing", "Exchange" ]
[ { "content": "Summarize the following paper into structured abstract.\n\n__NO_TITLE__: The P2P software program Kazaa has nearly 1/3 of a billion registered users globally (Graham, 2004).\nLiterature review - managerial backdrop and extant research: A managerial overview of P2P exchange\nResearch model: Theoretical backdrop\nMethodological overview: Construct and variable measurement\nData analysis and interpretation: Partial least squares (PLS) and dataset sizing requirements\nImplications, limitations, and directions for future research: Implications\nConclusion: The departure P2P systems represent from traditional market-based exchange is remarkable. We conducted a study on the drivers of one pervasive form of C2C exchange - or the case of internet-enabled file swapping of pirated music amongst individual consumers through P2P systems. Harnessing the immense opportunities created by P2P exchange will indeed be challenging, but we believe this work gives marketers an initial empirical glimpse at some of the individual-level antecedents and perceptual drivers underlying this important new exchange modality. Our findings indicated that several of the modeled antecedents - including aspects associated with the individual's prior experiential base, coupled with the convenience and assortment P2P systems provide the user - explain substantial variance in key endogenous constructs germane to the phenomenon. Given the fact that the diffusion and continued usage of P2P systems amongst consumers has yet to reach any sort of inflection point, the importance of conducting additional research and inquiry on P2P exchange as well as other emerging forms of C2C exchange both seem clear.\n", "role": "user" }, { "content": "Purpose: - The exchange of goods between individual consumers being facilitated by the internet and \"peer-to-peer\" (P2P) systems is severely altering the business landscape. 
In broader terms, P2P exchanges are a form of what has occasionally been labeled \"consumer-to-consumer\" (C2C) exchange. What little work there is in business research which has examined both C2C and P2P exchange has to-date been largely anecdotal or conceptual. This paper aims to address these shortcomings\nDesign/methodology/approach: - A model of antecedents germane to the P2P phenomenon is formulated and empirically tested which includes the consumer's satisfaction with, extent of usage of, and future behavioral intentions toward P2P exchanges as the focal endogenous constructs of interest.\nFindings: - The model is tested with perceptual and archival data from a survey of consumers active in one prevalent form of P2P exchange (i.e. the \"swapping\" of pirated music). The model explains significant variance in all three focal endogenous constructs.\nResearch limitations/implications: - The primary limitations include the study's exploratory nature and not being able to incorporate all potentially salient constructs and variables germane to the P2P phenomenon within the context of the study reported here. Additionally, and from a sampling perspective, a more heterogeneous sample may benefit future research.\nPractical implications: - The key insight emerging from the work is that the wide prevalence of P2P exchanges occurring globally between consumers has more to do with unexplored aspects of the phenomenon (such as its convenience; the \"connectedness\" it makes consumers feel to others) and seemingly much less to do with traditional marketing mix variables.\nOriginality/value: - While much has been written in the practitioner ranks and popular press about the problem of P2P exchanges, this work represents one of the first empirical studies of the antecedent drivers of the phenomenon. 
As is articulated in the paper's concluding sections, from a more embracing frame of reference, P2P systems and \"consumer-to-consumer\" exchanges are not the dire threats many marketing managers believe them to be. Challenges remain, however, for marketers to determine how to best harness these powerful new systems.\n", "role": "assistant" } ]
Household acquisition of healthy food away from home
[ "Calories", "Healthy", "Double-hurdle", "Expenditures", "Food away from home" ]
[ { "content": "Summarize the following paper into structured abstract.\n\nIntroduction: Over the course of three generations, US dining habits have undergone a major transformation, with Americans cooking at home less and dining out more. From 1954 to 2014, the share of food dollars Americans allocated to food away from home (FAFH) increased from 25 to 50 percent (USDA-ERS, 2016a). Defined as meals and snacks obtained from restaurants, schools and other foodservice establishments, FAFH expenditures exceeded $730 billion in 2014 and now comprise over 30 percent of Americans' caloric intake (USDA-ERS, 2016a; Lin and Guthrie, 2012; Stewart et al., 2005).\nData: Analysis of healthy FAFH utilizes data from the National Household Food Acquisition and Purchase Survey (FoodAPS). Sponsored by the US Department of Agriculture's (USDA) Economic Research Service and Food and Nutrition Service, FoodAPS is a nationally representative data set of household food purchases and acquisitions. Administered between April 2012 and January 2013, FoodAPS consists of one-week food diaries for 4,826 participating households, which document all food at home and FAFH acquisitions. Entry and exit surveys were also administered to obtain demographic and socio-economic data for each household.\nMethodology: The censored dependent variable problem\nDouble-hurdle model estimates: Cragg's double-hurdle model estimates are presented in Table III. The models explain a significant amount of the variation in households' healthy FAFH expenditure and calorie shares, with numerous significant coefficients and Wald χ2 test statistics being significant at the 1 percent level. The inverse Mills ratio is not significant in the share decision of either model, indicating sample selection bias is not present. Furthermore, Pearson correlation coefficients indicate no presence of a multicollinearity problem. 
In order to facilitate interpretation of the double-hurdle model estimates, average partial effects (APEs) of participation and conditional healthy FAFH expenditure and calorie shares are calculated and included in Table III.\nDiscussion: The primary objective of this study was to analyze households' acquisition of healthy FAFH from restaurants. Using the National Household FoodAPS Survey, Cragg's double-hurdle model is estimated in order to identify determinants of households' decision to purchase healthy FAFH when dining away from home, the share of households' FAFH expenditures allocated to healthy items and the share of households' FAFH calories obtained from healthy items. Based on the UK Food Standards Agency's nutrient profiling model, 37 percent of households' FAFH expenditures are allocated to healthy items and 32 percent of their FAFH calories are obtained from healthy items, respectively. Results indicate that households' acquisition of healthy FAFH varies with income, SNAP, FAFH purchase frequency, dieting, restaurant type, household composition, region and season. There is little difference in the impact of these factors on healthy FAFH expenditure shares vs calorie shares, suggesting that healthy FAFH expenditures proxy the contribution of healthy FAFH to a household's diet.
A double-hurdle model is estimated to identify determinants of households' decision to purchase healthy FAFH and the share of their FAFH expenditures and calories allocated to healthy items.\nFindings: Households' acquisition of healthy FAFH varies with income, food assistance, FAFH purchase frequency, dieting, restaurant type, household composition, region and season. There is little difference in the impact of these factors on healthy FAFH expenditure shares vs calorie shares, suggesting that healthy FAFH expenditures proxy the contribution of healthy FAFH to a households' diet.\nPractical implications: Results suggest that increased availability of healthy FAFH may need to be supplemented by targeted advertising and promotions, revisions to nutrition education programs, improved nutrition information transparency and value pricing in order to improve the dietary quality of households' FAFH acquisitions.\nOriginality/value: This study is the first to analyze household acquisition of healthy FAFH.\n", "role": "assistant" } ]
The degree of the uncertain reward and customer engagement: An explanation mechanism based on optimistic estimation
[ "Co-creation", "Customer engagement", "Optimistic estimation", "Uncertain reward" ]
[ { "content": "Summarize the following paper into structured abstract.\n\nIntroduction: Customer engagement (CE) is viewed as a strategy that mutually benefits buyers and sellers, and implies customers can engage in all business processes to create value for companies (Etgar, 2008; Yoon and Lee, 2019). The ability to transform customers into value co-creators has become a source of core competitive advantage for companies (Bendapudi and Leone, 2003; Naumann et al., 2017; Widjojo et al., 2019). Many studies have primarily focused on the stages of CE, such as designing (Franke and Schreier, 2010), manufacturing (Atakan et al., 2014) and post-evaluation (Bendapudi and Leone, 2003). However, empirical research thus far has scarcely focused on how customers engage to create value in the most important profitable stage - promotion - which is the first research gap, from the CE domain.\nLiterature review and hypothesis development: CE and value co-creation\nStudy 1: the effect of the DUR on CE: Participants and research design\nStudy 2: the reversed effect from the controllable lottery strategy: Participants and research design\nStudy 3: the attenuated effect from the time-delay reward strategy: Participants and research design\nGeneral discussion: Companies try to encourage customers to more proactively engage in value co-creation through various marketing strategies. This research considered literature involving uncertainty and OE to examine the effect of DUR on CE, particularly at the promotion stage. Further, this research distinguished the influences generated from different DURs. DUR affects customers' unconscious, positive perceptions of the reward and results in different levels of OE, which affects CE. 
This research's three studies demonstrated that, compared with LDUR, HDUR reduces customers' OE - and, subsequently, reduces CE.\n", "role": "user" }, { "content": "Purpose: As value co-creation has become a critical strategy for companies to gain competitive advantage, the purpose of this paper is to investigate the degree of the uncertain reward (DUR) and its impact on customer engagement (CE), particularly in the promotion stage, which has not been sufficiently explored. Further, optimistic estimation (OE) is examined as an underlying mechanism of the uncertain reward effect, as well as the impact of combining this with other marketing strategies: the controllable lottery and the delayed reward.\nDesign/methodology/approach: Three studies were conducted to examine the influence of DUR on CE, which included online experiments and a laboratory experiment. In total, 337 participants were recruited from China and the USA to enhance the study's reliability and validity.\nFindings: The research demonstrated that a high-degree uncertain reward led to less OE than a low-degree uncertain reward (LDUR), which subsequently decreased CE (Study 1). However, when other marketing strategies were combined - the controllable lottery (Study 2) and time-delay reward strategy (Study 3) - the uncertain reward effect was reversed (Study 2) or attenuated (Study 3).\nResearch limitations/implications: The current research only presents two possible reward amounts and independently explores the influence of two popular marketing strategies. Future research can explore customers' responses to engagement when they face multiple rewards and thoroughly investigate the influence of other social or psychological factors.\nPractical implications: Firms could apply an LDUR to enhance CE effects. Furthermore, this could be done at a low cost by empowering customers' controllability. 
Nonetheless, firms should be cautious with trade-offs when using time-delay reward strategies.\nSocial implications: The research contributes to establishing networks of customer-company and interpersonal relationships, as well as fostering closer social ties and social harmony.\nOriginality/value: This research offers not only initial research on CE in the promotion stage, but also a novel psychological perspective in CE literature. Meanwhile, the study provides substantial value in guiding managers to effectively transform customers into value co-creators.\n", "role": "assistant" } ]
Improving contractor social networking on IBS infrastructure maintenance projects: a review
[ "Innovation", "Technology", "Integration" ]
[ { "content": "Summarize the following paper into structured abstract.\n\n1. Introduction: \"The age of technology and the communication revolution have provided the construction industry with new opportunities to advance contracting and grow business. It's not surprising that companies of any size present their new services on the social media today\" (Clark, 2014). Many construction companies, such as industrialised building system (IBS) infrastructure maintenance contractors, aim to present their certified quality products and firms as valuably as possible in order to meet clients' needs for the services offered. IBS infrastructure maintenance involves critical and unique prefabricated components and onsite installation that utilise techniques, products, components or infrastructure systems from the structural classification (e.g. bridge, tunnel and dam), and therefore requires critical contractor networking attention to the specialisation services delivery information on websites. There are many factors affecting contractor social networking performance on such projects. Kostora (2015), Ruan et al. (2012), Larsen (2011) and Chowdhury et al. (2011) suggested that the main reasons for poor contractor social networking performance on websites were faulty information exchange, weak working relationships, generic descriptions of maintenance knowledge exchange in a tweet, ignoring contractual relationships and using less task dependency in maintenance activities to manage the project process, all of which can easily lead to client or participant misinterpretation of online engagement. Online engagement is commonly defined as the interaction between construction team members across a range of contractor social networks. Creative (2015) also suggested that inaccurate content production, including content strategy, was a major cause of communication misconceptions. 
Thus, it would seem that poor communication content is a major cause of poor contractor social networking performance in IBS infrastructure maintenance projects. Conflicts of interest can emerge from communication misconceptions about services-based specifications, affecting client satisfaction (Glick and Guggemos, 2009). In addition, clients' difficulty in identifying a specific contractor through generic communication contributes to ineffective marketing approaches, such as trust and relationship development, when the contractor's social network services delivery is not refined accordingly (Blismas and Wakefield, 2009).\n2. Research methodology: This research aims to develop a system to improve the contractor social networking websites used in social networking processes at IBS infrastructure maintenance projects. The contractor social networking processes are to identify avenues of team structure optimisation, and to assess and plan the IBS infrastructure maintenance implementation. The new system is expected to integrate the contractor social networking processes and information database to provide efficient mechanisms for contractor social networking websites at IBS infrastructure maintenance projects. A literature review was conducted at the beginning of the research to examine and analyse contractor social networking practices and the integration of ICT into the contractor's services delivery on websites. Second, multiple case studies were conducted to review the current practice of contractor social networking and ICT implementation in real situations. The causal explanation of the problems was also investigated in order to establish the requirements for particular social networking processes to be transformed into system approaches for managing maintenance of IBS infrastructure. 
The next stage involved the building of a process prototype model for in-depth understanding of the conventional processes and to propose the particular flow of processes to be transformed into the ICT-based system.\n3. Contractor social networking in infrastructure maintenance projects: SNA is an important analytical tool in contractor social networking research, as it provides indications of knowledge integration, collaborative working and effective communication in infrastructure maintenance projects (Loosemore, 1998; Chinowsky et al., 2008; El-Sheikha and Pryke, 2010; Larsen, 2011). Cross and Prusak (2002) and Cohen et al. (2001) defined SNA functions which include providing a method to understand informal networks within and between organisations, managing the informal networks systematically, supporting collaboration, commitment, ready access to knowledge and talent, and coherent organisational behaviour. Less effective SNA, such as improper handling and information management of the contractor's services delivery on websites during the organisation and coordination of infrastructure maintenance projects, will affect the trust principle, project progress and service quality. Kostora (2015) stated that posts for the document element and knowledge section in information management may range from 15 to 85 per cent of a project's total organisation and coordination posts. In addition, Ireland (2010) indicated that almost 91 per cent of the total project planning posts of any industrial organisation consist of the project's organisation and coordination posts. Therefore, there is a need for efficient contractor social networking in order to control information technology post productivity in infrastructure maintenance projects.\n4. 
IBS infrastructure maintenance projects: In most of the IBS infrastructures, there is a lack of measurement tools available for the social networking process that would enable contractors in Malaysia to understand the level of assessment of contractor social networking website practices. The social networking processes involve the identification, assessment and planning of management for infrastructure structures and facilities. The existing literature revealed that the app has contributed to the crucial problems in bridge resilience for the IBS infrastructures as Maintenance and Development Unit (UPS) (Venkittaraman and Banerjee, 2014). The maintenance management issues have evolved to the point that they affect maintenance efficiency, for example through inadequate technical knowledge, shoddy workmanship, improper assembly of the components and poor quality IBS products (Zhou et al., 2010). However, the approaches to address the problems are undertaken by staff through proper prioritising of bridge maintenance, the improvement of the standard inspection with detailed specification data on building defects, and the allocation of an applicable financial contingency for maintenance control of defects such as leaked concrete and structural cracks.\n5. Contractor social networking practices in IBS infrastructure maintenance projects: With the maturing of contractor social networking tools and platforms, an increasing number of smart mobile device application (app) services are being adopted in various industry functions, including real estate, waste management, transportation, supply chain and maintenance management (Sattineni and Schmidt, 2015). Nevertheless, some industry practitioners still hesitate to adopt this innovative tool. According to the survey conducted by Ekow and Kofi (2016), maintenance participants did not identify much value in using the app, except in the last few years. However, according to Zheng et al. 
(2016) and Azhar and Cox (2015), the actual benefits of the app, along with its team networking realisation capabilities, have now been comprehended and utilised to the best benefit of all stakeholders of an IBS infrastructure maintenance project, such that contractor social networking can be integrated into the project management and working process, which can also be used by most of the AEC sector for maintenance quality control and efficient information utilisation. Currently, in IBS infrastructure maintenance projects worldwide, work has been carried out into the use of apps in manufacturing, components tracking, waste reduction and supply chain management (Son et al., 2012; Davies and Harty, 2013; Kelm et al., 2013; Kim et al., 2013). Apps contain the information needed for particular phases of a building's life cycle (scheduling, analyses, cost evaluation, etc.) and should offer construction new opportunities to improve the communication and collaboration between participants through higher interoperability of data. They can provide potential savings (cost and efficiency), and can also be suitable for maintenance and support in the following ways (Joyce, 2011a; Colonna, 2012; Razmerita et al., 2014):\n6. Improving contractor social networking in IBS infrastructure maintenance projects: Currently, in IBS infrastructure maintenance projects worldwide, work has been carried out into the use of apps in contractor social networking, such as BIM 360 Field, iSite Monitor, Plan Grid and Aconex Mobile. App tools contain the information needed for particular phases of a building's life-cycle (scheduling, analyses, cost evaluation, etc.) and should offer construction new opportunities to improve the communication and collaboration between participants through higher interoperability of data (Joyce, 2011a, b). 
It can provide potential savings (cost and efficiency), and can also be suitable for faster information sharing with a larger network of people and organisations (Chen and Kamara, 2011). However, Sattineni and Schmidt (2015) state that the app has yet to become mainstream as a communication tool throughout the maintenance industry, and they attribute this to the lack of understanding, due to technical difficulties, of the parameters and indicators for measuring resilience. The weakness of the app in the IBS bridge infrastructure maintenance industry is that it is limited to the level of awareness and utilisation of contractor social networking applications on app devices concerning sustainable development towards vulnerable communities. In particular, the influence of the perceived usefulness and perceived ease of use of maintenance-related applications on maintenance professionals' intentions to use such applications offers fruitful avenues for future research. The researchers stated that proper resolution strategies for contractor social networking are essential during the maintenance period in order to achieve better acceptance and utilisation by maintenance professionals (Silvius, 2016).\n7. Need for improvement: An initial assessment of the tools and techniques currently in use in contractor social networking websites suggests that most of them are under development, with a few being used on a commercial basis. Based on the literature, this research is intended to improve the information management of contractor social networking on IBS infrastructure maintenance projects through the use of an app device. The relevance and capability of the app device to improve the contractor's certified quality product and firms, as well as maintenance specialisation services, have been confirmed and verified through the review in the previous section. 
The app clearly covers a wide realm of information categories such as company branding, disseminating project news, information on job hiring and client networking. This will help create an environment that facilitates contractor social networking, making it easier and more effective (Anumba and Wang, 2012). However, more sophisticated contractor social networking website solutions which accord with the needs of IBS infrastructure maintenance schemes in the future are anticipated to use multiple aspects of communications and collaboration technologies (e.g. marketing, connections, support, education and recruiting), such as the integration of social media platforms that address vulnerability and resilience characteristics (Kudos BIM 360 Field and Plan Grid Publons) (refer Figure 3 and Table II).\n8. Lessons and enlightenments: High attention should be paid to information management and the application of contractor social networking databases. Because of the inadequacy of app tools and the limited understanding of the disaster risk management function mechanism, the design of contractor social networking structures is very important. The application of modern app tools such as BIM integration can avoid or alleviate poor service delivery of main infrastructure.\n9. Conclusion: A brief overview of contractor social networking practices on IBS infrastructure maintenance projects was clearly presented in this paper. The findings from the literature show that the researchers felt that the functionality of the app device was inappropriate for addressing social networking problems, and there are some limitations which would need to be addressed in the future. It was identified that the main limitation to the use of the app device on IBS infrastructure maintenance projects was primarily the lack of a measurement tool for bridge resilience. 
The full potential of social media resources should be maximised during communication and collaboration in order to avoid loss of profit for most maintenance contractors. There was also a need for proper training for the effective implementation of the app device in such projects. The development of current social media resources forms part of the basis for an effective framework, which will then be used to support improvements in contractor social networking practices. The next stages of this research will investigate the further innovation of app devices in contractor social networking, including the extent and nature of mechanisation of the IBS infrastructure maintenance processes (framework development for disaster-resilient infrastructure), in developing such a device in the future.\n", "role": "user" }, { "content": "Purpose: A key factor adversely affecting contractor social networking performance is the improper handling and information management of the contractor's services delivery on websites. Contractor social networking is particularly problematic on industrialised building system (IBS) infrastructure maintenance projects, where the contractor's certified quality product and firms are not matched with maintenance specialisation services. The paper aims to discuss this issue.\nDesign/methodology/approach: This paper reports on the early stages of research which is developing a new information and communications technology (ICT)-based approach to managing contractor social networking on IBS infrastructure maintenance schemes. 
As a precursor to this work, the paper reviews current contractor social networking websites practices on IBS infrastructure maintenance projects and explores the ICT tools and techniques currently being employed on such projects.\nFindings: The findings reveal the need for more sophisticated contractor social networking websites solutions which accord with the needs of IBS infrastructure maintenance schemes.\nOriginality/value: The paper concludes by presenting a research framework for developing such a system in the future.\n", "role": "assistant" } ]
Information governance requirements in dynamic business networking
[ "Metadata", "Information security", "Information governance", "Information quality", "Business networking", "Inter-organizational business process" ]
[ { "content": "Summarize the following paper into structured abstract.\n\n1. Introduction: The competition in today's globalized markets increasingly forces organizations to provide mass customized integrated solutions for their customers (Vargo and Lusch, 2004; Tukker and Tischner, 2006). To achieve competitive advantage, organizations need to concentrate on their core competencies and outsource others (Prahalad and Hamel, 1990). This situation increasingly stresses the importance of dynamic business networking (BN) to respond to emerging customer requirements (Mehandjiev and Grefen, 2010). A dynamic BN requires the sensing of customer needs and a rapid response to these needs through the agile orchestration of resources distributed among parties (Sambamurthy et al., 2003). This highlights the importance of information-intensive and dynamic inter-organizational business processes (Grefen et al., 2009). The information-intensity points to a dynamic evolution of inter-organizational business process to respond to environmental requirements (Reichert and Weber, 2012). Information-intensity is increasingly enabled by the internet of things technologies - like sensor, RFID, and social media technologies - that create more and more digital information about real world objects (Lohr, 2012). This information can be generated by suppliers (such as logistics-related information) or by customers (such as information implying customers' experience in social media). Although the access to globally distributed information enables a BN to exploit environmental opportunities, it also can result in emerging issues such as unsecured information access and low-quality information products (Tallon, 2013; Haug et al., 2013; Silvola et al., 2011; Rasouli et al., 2015c). These issues can strongly disrupt the performance of a BN. So, they need to profoundly be recognized and responded by information governance (IG) mechanisms (Tallon, 2013).\n2. 
The approach for the identification of IG requirements: For the identification of IG requirements, as a first phase of this design science-based research, we investigate three potential research approaches, namely a survey, a case study, and a systematic literature review (SLR). In this research, due to the specified research question that concentrates on a particular business situation (i.e. dynamic BN), a high consideration of the research context is needed. However, a survey's ability to take contextual factors into consideration in the exploration of a phenomenon is extremely limited (Yin, 2013). Case study research enables an in-depth investigation of contextual factors (Yin, 2013), but it cannot sufficiently support the comprehensive identification of IG issues. The SLR approach (Keele, 2007; Tranfield et al., 2003) enables a comprehensive search for relevant evidences of an issue within a context and so it fits best with our research approach. The SLR as an exploratory research approach enables a sufficient review of the evidences relevant to the specified research question. The main limitation of this approach is that the systematically reviewed evidences originate from theory. Although within this approach the scientific rigor of the explored evidences to answer the specified research question can be supported, their practical significance needs to be investigated empirically. For the investigation of the practical significance of the explored evidences we use case study research (see Section 4).\n3. The identified IG requirements: The identified IG requirements in dynamic BN and their sources are listed in Table II. These IG requirements are categorized within three main domains of IG, respectively, the IQ, the information security, and the metadata domain. The IQ domain indicates the extent to which information provided within a BN fits with stakeholders' needs (Wang, 1998; Otto et al., 2011). Based on Kahn et al. 
(2002), in the IQ domain both information product quality and information service quality can be distinguished. The former considers information as a product that needs to be produced by a manufacturing process with an end-product of information stored in a database. But, information service quality focusses on the activities which occur after information is stored as an end-product in a database, i.e. to enable consumers to obtain and use information. The information security domain underlines the protection of information confidentiality, integrity, and availability (Bishop, 2003). The metadata domain reflects information about information that enhances the usability and understandability of an information service (Khatri and Brown, 2010). The identified IG requirements, which are classified within the aforementioned three domains, are elaborated further.\n4. The evaluation of the identified IG requirements: We conducted a case study to investigate the practical significance of the identified IG requirements in a real world situation. Indeed, we aim to investigate \"how\" the dynamism in a BN results in the emergence of the identified IG requirements. The case study research approach is appropriate to respond to this \"how\" question, because it enables the exploration of a chain of in-depth evidences reflecting IG issues within a real-life dynamic BN (Yin, 2013). The ability of other alternative research approaches - like the survey research - is limited for the investigation of the complex context such as a dynamic BN (Cleven et al., 2009). In this case study, we explore information exchange issues resulting from the dynamism of networked business processes. A theoretically explored IG requirement is practically significant if it addresses a real-life information exchange issue (see Figure 3).\n5. 
Discussion and conclusion: Discussion\n", "role": "user" }, { "content": "Purpose: The purpose of this paper is to identify information governance (IG) requirements in the context of dynamic business networking (BN).\nDesign/methodology/approach: For the identification of IG requirements a systematic literature review is conducted. The practical significance of identified IG requirements is evaluated through a case study.\nFindings: A comprehensive list of IG requirements in dynamic BN is identified. These requirements are classified in information quality, information security, and metadata domains. The conducted case study demonstrates information exchange issues in a real world dynamic BN that reflects the practical significance of the identified IG requirements.\nResearch limitations/implications: Exploiting emerging market opportunities through a dynamic BN necessitates the realization of a comprehensive IG program within the network. Otherwise, information exchange related risks can interrupt the operations in a BN. In this research the authors concentrate on interactions between parties within a BN. The interactions with customers for the co-creation of value are not addressed directly. Although the conducted case study reflects the practical significance of the identified IG requirements clearly, more empirical study is needed for prioritizing these IG requirements.\nPractical implications: The governor of a BN needs to balance between the value obtained from dynamic networked interactions and the risk evolving from the dynamic inter-organizational information exchange.\nOriginality/value: The comprehensive list of IG requirements that are identified in this research can be used to develop an IG program that enables high quality and secure information exchange in a dynamic BN.\n", "role": "assistant" } ]
Coaching versus therapy in business management: Separating the practices, skill-sets and goals between two converging disciplines
[ "Coaching", "Counselling", "Occupational psychology", "Gestalt psychology" ]
[ { "content": "Summarize the following paper into structured abstract.\n\n__NO_TITLE__: At what point is the line between executive coaching and therapy crossed, and exactly how do managers differentiate between the two disciplines? These are questions which Peter Bluckert's article, \"The similarities and differences between coaching and therapy\", addresses, while attempting to lend definitions to both. These questions are becoming more pertinent as time goes on, due to the increasing popularity of the coaching and mentoring system.\nConvergence and differentiation of approaches and thinking: Definitions which attempt to describe therapy as focusing on working with individuals, while coaching, on the other side, focuses on developing models of change, processes and procedures, are inadequate. The line between the two is less clear-cut, because many of the disciplines which inform both coaching and therapy overlap with each other. This is complicated further by the fact that many therapists, clinical psychologists and management trainers have gone into consulting and coaching, taking with them their skill-sets and professional practices. As a result, new models of coaching have been constructed which amalgamate these increasingly diverse disciplines. Recent analyses have come to define coaching as a mixture of coaching and counseling, making it more difficult than ever to separate the roles of coach and therapist.The demarcation issue between coaching and therapy concerns many coaches who have approached their profession without a psychological background. These coaches have not received sufficient guidance from existing literature. Such literature characterizes therapy as past-focused, designed to address painful, unresolved issues in clients with personal troubles or pathology; by contrast, coaching is typically defined as focusing on the present, being geared towards highly functioning, successful people who want to achieve even more. 
Because of these inadequate definitions, Bluckert highlights the key differences with greater clarity than they have previously received.\nKey differences: The assertion that therapy exists exclusively for people with pathology or troubled pasts, while coaching exists for highly functional and successful individuals, is misleading. People enter therapy for many and varied reasons - for example, self-exploration, better self-management or professional development - while it is equally true that many successful managers have personal problems and would benefit from therapy. This latter point is particularly pertinent for commercial positions which raise issues of high stress, addiction and depression. Clear daylight between coaching and therapy exists, however, when analyzing the ultimate intentions of the two disciplines. While both share goals of improving the individual's effectiveness and productivity in the workplace, therapy goes further in addressing non-work aspects of a client's life, which may involve in-depth exploration of their personal history and relationships. These issues may be only tangentially related to the business environment. Such personal work, which negotiates intense emotional experiences, demands skilful guidance from an experienced practitioner. Such levels of psychological expertise cannot be expected of coaches in a business environment, particularly if the coach has not come from a therapy-related background. From this fact alone, it becomes clear that another major difference between the disciplines lies in the fact that therapists require extensive training, typically far in excess of coach training. Coach training can vary from a few days to a full year, in comparison with the years of training and studying required of qualified therapists.Other differences can be listed as follows:* Coaching is more results and action-oriented than therapy. 
Coaching exists with a three-way contract involving the manager (the person being coached), the organization and the coach.* Coaching involves processes rarely used in therapy such as feedback from managers, peers and subordinates. Therapy, by contrast, is more veiled behind a two-way confidentiality agreement.* The length of sessions. Whereas therapy sessions can be undertaken regularly within the hour or 50-minute frame, coaching sessions tend to last longer and to be spaced at longer intervals. Therapy sessions also take place away from the business arena in the therapist's consulting rooms, whereas coaching can take place in a typical business environment.* Coaches are often required to interact, to some extent, in the business culture of their clients, and partake in the complexities of organizational life. For this reason, coaches generally dress similarly to their clients and wear suits, while therapists dress more for their own, and their client's, comfort.* Fee rates. Coaches typically tend to earn as much as quadruple a therapist's fee.The final, more fundamental difference, however, lies in the \"presence\" brought with each position. The therapist's approach to his trade is more composed, thoughtful and enigmatic, in comparison with that of the coach who seeks to channel their energy to create a more action-focused result in the client. While the role of the therapist requires room for reflection and self-interrogation on a personal level, that of the coach seeks to energize and encourage the client towards a higher level of achievement in their professional field.\nThe way forward: Bluckert argues that coaches should be placed under professional supervision. It is almost impossible for a coach, who does not have a therapy background, to know when a client might require help beyond their professional remit. 
Supervision, which would provide a means of support, guidance and consultation for the coach with a qualified individual from the therapy discipline, would ease the pressure on the coach and allow them to spot those clients who require further help. Coaching and therapy therefore clearly share some similarities, but differ fundamentally in their ultimate goals for the client, their training requirements, and their typical practicing environments. Therapy tends to focus on deeply personal issues within an individual, while coaching focuses more on people management, strategic planning, organizational change, business performance and personal development. However, as the field of coaching expands, there will be a greater differentiation in types of coach, including those who will be increasingly qualified to deal with therapeutic issues.\nComment: This review is of \"The similarities and differences between coaching and therapy\", by Peter Bluckert, Director of Peter Bluckert Coaching in Bingley, UK. It is a short and concise discussion, which clearly outlines the differences in the intentions, tools and skill-sets between therapy and coaching. Bluckert also suggests that professional coaching will increasingly require either consultation or training from qualified therapists in the future.\n", "role": "user" }, { "content": "Purpose: - The paper aims to review the latest management developments across the globe and pinpoints practical implications from cutting-edge research and case studies.\nDesign/methodology/approach: - This briefing is prepared by an independent writer who adds their own impartial comments and places the articles in context.\nFindings: - At what point is the line between executive coaching and therapy crossed, and exactly how do managers differentiate between the two disciplines? 
These are questions which Peter Bluckert's article, \"The similarities and differences between coaching and therapy\", addresses, while attempting to lend definitions to both. These questions are becoming more pertinent as time goes on, due to the increasing popularity of the coaching and mentoring system.\nPractical implications: - Provides strategic insights and practical thinking that have influenced some of the world's leading organizations.\nOriginality/value: - The briefing saves busy executives and researchers hours of reading time by selecting only the very best, most pertinent information and presenting it in a condensed and easy-to-digest format.\n", "role": "assistant" } ]
Consumer participation in last-mile logistics service: an investigation on cognitions and affects
[ "Co-creation", "Self-service technology", "Consumer participation", "Last mile", "Cognitions and affects", "Consumer logistics", "Self-collection service" ]
[ { "content": "Summarize the following paper into structured abstract.\n\nIntroduction: Encouraging customer participation (CP) in co-creating service values represents the new frontier in achieving a competitive advantage (Bendapudi and Leone, 2003; Mostafa, 2016). Instead of being passive recipients of service provisions, customers are viewed as proactive co-creators of service values, whereas the service providers are urged to assume the role as facilitators in the value co-creation process. CP allows the service providers to foster closer and more profitable relationships with their customers, and at the same time, allows the customers to influence specific changes in service offerings (Vargo et al., 2008; Mostafa, 2016).\nLiterature review: Consumer logistics\nConceptual framework: As the prior literature on consumer logistics is limited and largely conceptual, the current paper turns to the broader literature on service quality and consumer psychology to build its conceptual framework. Theoretical insights from attitude and affect theories are adapted to establish the theoretical premise. In addition, self-collection is viewed as a SST. As a critical element in the CP research, literature on the SST quality and the SST adoption is also referenced.\nMethodology: Survey design\nResults: In this section, an EFA on the preliminary data was first conducted. SEM was then employed for further data analysis. While EFA was employed to assess the dimensionality of all the measurement items, SEM was applied to confirm and validate the complex structural relationships. The data were analysed using IBM SPSS Statistics Version 23 for EFA, and AMOS version 22 for SEM with the maximum likelihood estimation.\nConclusion: Discussion and managerial implications\n", "role": "user" }, { "content": "Purpose: Increasingly, the logistics industry offers innovative solutions that interact with end-consumers directly. 
The purpose of this paper is to examine the consumer participation behaviour in co-creating logistics service values, using self-collection via automated parcel station as an example. Built on the synthesised insights from logistics studies and behavioural theories on consumers' attitude and affect, the effect of cognitions (what consumers think) and affects (what consumers feel) are investigated.\nDesign/methodology/approach: A total of 500 valid responses are collected from an online panel of respondents and the data are analysed using exploratory factor analysis and structural equation modelling.\nFindings: Consumers' affects towards participation are stronger motivations that not only intrinsically motivate consumers to participate but also exert an indirect influence via consumers' cognitions.\nPractical implications: To elicit consumers' affections, it is critical to create enjoyable (enjoyment), assuring (assurance) and secure (security) service experiences. On the other hand, an overly straightforward service offering (in terms of cognitive functionality), void of the aforementioned experiences, may discourage consumers from participation.\nOriginality/value: This research unveils consumer participation in co-creating logistics service values, contributing to studies on the emerging phenomenon of consumer logistics. A rebalancing of the logistics research from a utility-creation perspective to an experience-creation perspective has been advocated.\n", "role": "assistant" } ]
A business intelligence framework for sustainability information management in higher education
[ "Business intelligence", "Higher education", "Sustainability reporting", "Dashboards", "BI framework", "Strategic sustainability information" ]
[ { "content": "Summarize the following paper into structured abstract.\n\nIntroduction: The increased competition in the higher education sector has resulted in a need for faster and improved decision-making at all levels of an institution (Stocker, 2010). Higher education institutions (HEIs) globally have realised that the effective analysis of sustainability information can improve sustainability management can be used as a tool to measure sustainability efforts and manage long-term risks and opportunities (Posner and Stuart, 2013). Improved sustainability management includes the process of sustainability reporting. The increased awareness of sustainability is emphasised by the number of sustainability reporting frameworks being implemented (Institute of Directors, 2009; Pina, 2011). These frameworks provide institutions with guidelines on how to report on sustainability, and examples are the Global Reporting Initiative (GRI), ISO 14000 series; the Triple Bottom Line; and the Sustainability Tracking, Assessment and Rating System (STARS). STARS is a reporting framework specifically designed for HEIs in North America [Association for the Advancement of Sustainability in Higher Education (AASHE), 2011], and although the GRI reporting framework was designed for companies, it does have modifications for HEIs with an additional category for education.\nResearch methodology and case study: The DSR methodology (Hevner, 2007) was the underlining research methodology followed in the larger study. The three research cycles of the DSR, the relevance cycle, the design cycle and the rigor cycle, were applied in this research in developing two artefacts. The first artefact produced was the BI framework for strategic sustainability information management in higher education (the Sustainable BI Framework for short), a theoretical framework to support strategic sustainability information management in HEIs. 
An extensive literature study was conducted to derive the theoretical framework. In conjunction with DSR, a case study was used. The case study was a single case of an established HEI in South Africa where sustainability has been prioritised by strategic management (Nelson Mandela Metropolitan University, 2008). Large volumes of data relating to the sustainability of the institution are generated and recorded; however, the use of such data to manage sustainability at the strategic level is limited. The second artefact produced was a practical BI software tool, \"Sustainable BI\", which was developed to address problems with strategic sustainability information management.\nA business intelligence framework for sustainable higher education institutions: Muntean et al. (2013) proposed a BI architectural framework created specifically for HEIs, with six component layers, namely:\nArchitecture of a business intelligence dashboard tool for higher education institutions: The architecture of the Sustainable BI Tool incorporated all the framework components, namely, the operational data sources, the ETL process, the data layer, the reporting, analytical and monitoring layers and the presentation layer (Figure 2). Data across multiple sustainability categories were included. Tableau was used for the presentation, reporting, analytical and monitoring layers, as it is a leader in BI software (Gartner, 2014). Tableau was used to create the dashboards to visualise the information, and these dashboards can be viewed via the presentation layer through Tableau Online, Tableau's online cloud platform. However, Tableau does not provide a data platform; therefore, Microsoft SQL Server was used as the data platform. Several tools were used in the reporting, analytical and monitoring layers. The Sustainable BI Tool supports the creation of both CSV reports, typically viewed in Microsoft Excel, and PDF reports, typically viewed in PDF readers such as Adobe Acrobat Reader. 
Although Tableau allows the creation of predictive models, Tableau's predictive capabilities are limited; therefore, the statistical program R was used to create the predictive models.\nCapabilities of sustainable business intelligence: The tool, Sustainable BI, was developed as a desktop solution to support all the required capabilities for a BI solution: performance dashboards; OLAP capabilities; forecasting capabilities; reporting and filtering capabilities.\nEvaluation of the tool sustainable business intelligence: This paper reports on the two separate evaluations of the Sustainable BI Tool that were conducted. A heuristic evaluation (HE) was conducted first and is an informal usability evaluation in which feedback is gathered from users regarding a particular system according to several heuristic characteristics (Nielsen and Molich, 1990). The HE was used to gain an early understanding of the usability issues of the Sustainable BI Tool to address them prior to the usability evaluation. Issues identified in the HE were then addressed, and the tool was modified accordingly. After this, a full usability evaluation was conducted. All evaluations were conducted on desktop personal computers as the tool was designed for this platform.\nLimitations and implications: Existing frameworks for sustainability reporting provide organisations with guidelines on how to report on sustainability but do not provide the software or BI tools to produce integrated sustainability reports that consider all the stakeholders and allow for strategic predictive analysis. Previous studies of BI frameworks designed specifically for HEIs (Bosire, 2014; Muntean et al., 2013) are limited and do not consider the steps required for dashboard development or the details of the data sources in the four components of sustainability at an HEI. Also, they do not provide guidelines for the design of sustainability dashboards. 
Here, it might be useful to describe the conceptual approaches and ICT tools (including already existing prototypes) in terms of ICT-based sustainability reporting in HEI.\n", "role": "user" }, { "content": "Purpose: Higher education institutions (HEIs) face a number of challenges in effectively managing and reporting on sustainability information, such as siloes of data and a limited distribution of information. Business intelligence (BI) can assist in addressing the challenges faced by organisations. The purpose of this study was to propose a BI framework for strategic sustainability information management (the Sustainable BI Framework) that can be used in HEIs.\nDesign/methodology/approach: The research applied the design science research methodology whilst using a South African HEI as a case study. The problems with sustainability information management were identified, and a theoretical framework was proposed. In addition, a practical BI software tool was developed as proof of concept to address these problems and to assist with the management of strategic sustainability information in an HEI.\nFindings: The proposed sustainability BI tool was evaluated through heuristic and usability evaluations with senior management. The results indicated that the usability of the BI tool was positively rated and that the framework can assist in overcoming the constraints that HEIs face in effectively managing sustainability information.\nResearch limitations/implications: The research was limited to a single case. However, the theoretical framework was derived from and expanded on existing stakeholder theory, sustainability reporting theory and literature on BI dashboard development. 
The framework was implemented successfully in the Sustainable BI Tool prototype at the case study, and the results reveal in-depth information regarding information management for sustainability reporting in higher education.\nPractical implications: The Sustainable BI Tool is a solution that integrates data from multiple areas of sustainability and provides a single integrated view of the information to stakeholders. The information is provided through performance dashboards, which provide predictive capabilities to enable management to report on sustainability and determine if the institution is meeting its strategic goals. The lessons learnt can also assist other HEIs considering implementing BI for sustainability reporting.\nSocial implications: Improved sustainability reporting for HEIs provided by the BI framework can improve the environmental and social impact of the educational community.\nOriginality/value: This study provides the most comprehensive framework for guiding the design of a BI tool to assist in effectively managing sustainability information in HEIs.\n", "role": "assistant" } ]
The hidden risks in strategic account management strategy
[ "Strategic management", "Key accounts", "Buyer‐seller relationships" ]
[ { "content": "Summarize the following paper into structured abstract.\n\n__NO_TITLE__: Strategic account management (SAM) strategies, sometimes known as key account management (KAM), have evolved across the world as an approach to handling the issues raised by very large customers who dominate a supplier's sales and profit profile. Such customers are those described by the Strategic Account Management Association as complex accounts with special requirements, characterized by a centralized, coordinated purchasing organization with multi-location purchasing influences, a complex buying process, large purchases, and a need for special services. SAM involves dedicating significant corporate resources to a small number of special customers[1].\nBut why strategic account management?: SAM has grown because demands from large customers have caused suppliers to dedicate significant organizational resources to incorporate special value-adding activities (e.g. joint product development, business planning, consulting services) into their offering to the customer. On the one hand, in modern industrial markets, many buyer-seller situations are now characterised by fewer, larger and more complex purchasing organizations, with intense competition the norm. 
On the other hand, consumer goods markets have seen a radical power shift to retailers, suggesting that, increasingly, corporate financial performance depends on the effective management of large customers (Shapiro et al., 1998).Certainly, SAM is a strategic development which has become increasingly widespread in response to a variety of widespread customer and market pressures:* escalating levels of competition in most markets and consequently higher selling costs for suppliers;* increased customer concentration resulting from merger and acquisition activity, as well as competitive attrition in many markets;* growing customer emphasis on centralized strategic purchasing as a major contributor to enhancing the buyer's cost structure and building competitive success in their end-user markets;* active strategies of supplier base reduction by larger buyers to reduce purchasing costs; and* increasing exploitation by large customers of their position of strategic importance to their suppliers to gain lower prices and enhanced terms of trade (Capon, 2001).For many suppliers, however, SAM is far more than a sales strategy for major accounts - it is a progression towards a form of \"partnership\" or alliance with major customers, characterized by joint decision making and problem solving, integrated business processes and collaborative working across buyer-seller boundaries, described as a process of \"relational development\" (Millman and Wilson, 1989).It is this transition from simply focusing resources on a major customer toward what is a new collaborative model of buyer-seller relationships, which worries us. 
Our research and work with managers identify a checklist of potential negative consequences from SAM strategy, which appear to have been ignored by many companies and their strategy advisors.\nThe hidden down-side of strategic account management: Investing in strategic weakness\nConclusions and recommendations: We have attempted to contrast the apparently compelling case for strategic account management (SAM) models that develop collaborative and integrative relationships with major customers, with the potentially damaging traps for the unwary. In many situations, it appears that the adoption of SAM models is based on the suspect logic that the best use of a company's resources is to invest heavily in that part of the business (the largest customers) which has the lowest margins and the highest business risk. This is a somewhat unconvincing case for investment.Our logic suggests the search should be for alternative strategies that avoid the trap of high dependence on a small number of powerful key accounts. Some would probably suggest that this is a search doomed to failure - the most powerful customers control markets and are unlikely to surrender this control willingly. Yet, on the other hand, consider the potential disruption of the status quo in a market by the introduction of a new business model. For example, consumer and business computer users have voiced numerous complaints over the years about the product functionality of Microsoft offerings, and struggled in vain against the massive Microsoft market share in areas like operating systems and server software. In 2005, we can see the dramatic impact of Linux software - available free or cheaply - developed through a peer-to-peer network, in a business model that appears uninvolved with concerns like profitability. Microsoft increasingly looks like a company with a mid-life crisis, that has no effective response to Linux. 
However, more interesting yet is the fact that much of the Linux revolution has been driven and facilitated by IBM, Sun Microsystems and Dell, who are dramatically reducing their dependence on the old adversaries at Microsoft. Actively managing dependence between buyer and seller may be one way out of the trap.New business models that will be effective in avoiding the strategic account management trap will probably share some of the following characteristics:* Reducing critical dependencies and risks by developing alternative routes to market - consider the example of the automotive manufacturers developing direct channel strategies to take back control of the value chain and reduce dependencies on independent distributors.* Developing alternative product offerings to rebuild brand strength as a counter to the power of the largest.* Emphasizing the need for high returns to justify taking on high risk business, not the other way around.* Reducing strategic vulnerabilities created by excessive levels of dependence on a small number of customers or distributors.* Clarifying the difference between major accounts and key accounts and developing appropriate ways of managing these different types of relationship profitably.* Actively rejecting business from some sources because the customer is unattractive in terms of profitability and risk, even if the business on offer is large.* Managing customer accounts as a portfolio (see Figure 1) using criteria of attractiveness and prospective performance, not simply customer size.There are undoubtedly situations when KAM is an effective strategy to manage relationships with major buyers and to develop collaboration and partnership rather than adversarial transactions. However, what requires careful management consideration is under what conditions this is true, and whether these are truly the conditions they face. 
There is potential insight in evaluating the customer portfolio and its changing composition, and then considering not simply the quantity of business offered by the largest accounts, but also the quality of that business. The quality of business with major accounts includes the profitability of the business, but also the business risk involved, the impact of increased dependence on a small number of customers, and the opportunities given up. A balanced evaluation of this kind provides the basis for a more informed decision, but may also be the trigger for the search for strategic alternatives that may avoid the down-side of dependence on powerful key accounts. This balanced evaluation and search for new business models appears urgently needed in many organizations.\n", "role": "user" }, { "content": "Purpose: - To provide a critical perspective on the robustness of strategic account management (sometimes called key account management) strategies as an approach to managing relationships with large and very large customers.\nDesign/methodology/approach: - The paper is based on exploratory interviews and management workshop discussions and the observation of the operation of strategic account management approaches in practice, and is illustrated with cases drawn from secondary sources.\nFindings: - Suggests that SAM may amount to investment in strategic weakness that enhances dependencies and limits the scope for superior supplier performance; a customer portfolio analysis of all accounts identifies where the best prospects for long-term profit exist; many strategic account relationships are based on exaggerated estimates of customer relationship requirements and customer loyalty. 
We conclude that strategy analysts should be concerned with developing new business models that avoid the trap of dependence on powerful, major customers, rather than pursuing business strategies like SAM that reinforce dependencies.\nResearch limitations/implications: - There are a number of research opportunities in examining the long-term impact of formalizing strategic account management systems in supplier organizations.\nPractical implications: - We aim to provide managers and analysts with a different perspective on strategic account management strategy that considers the potential weaknesses and vulnerabilities created through the strategy, to be compared to the attractions of this strategy.\nOriginality/value: - Our goal is to add to understanding of strategic relationships between buyers and sellers. We do not believe that the downside to strategic account management strategy has received adequate recognition in existing treatments of the topic.\n", "role": "assistant" } ]
Champagne: the challenge of value co-creation through regional brands
[ "Case study", "France", "Competitive strategy", "Conceptual/theoretical", "Strategy", "Value creation", "Territorial branding", "Value co-creation", "Champagne", "Resource-based view" ]
[ { "content": "Summarize the following paper into structured abstract.\n\nIntroduction: Brands are traditionally seen as belonging to a firm. The traditional marketing view indicates that brands provide value to products and their producers (Kotler et al., 1994). However, the concentration on defining the product and on the producer has been criticized as being one-sided, and lacking a consumer focus (Aaker, 1991). Aaker (1991) talks about a brand personality sought by the consumer; meanwhile Sheth et al. (1991) define a brand as an asset which mirrors the consumer's core values. De Chernatony (2009) defines brands as an intangible asset which adds value for the consumer, a perspective providing a balance to the earlier dominant producer-focus. An additional perspective also adds to the equation a shared vision, engaging all employees in an enterprise (de Chernatony and MacDonald, 2003). While a brand may lead to value creation for not only producers but also consumers and stakeholders, the impact is not direct and it forms part of multiple organizational and institutional processes.\nLiterature review: The existence of unique resources, such as brand reputation, can influence the creation of value (de Chernatony, 2009; Holt, 2002), as well as the orientation of the organization towards consumers, which is reflected in nurturing certain resources and organizational capabilities (Homburg and Pflesser, 2000). In addition to strategic internal resources, there may be an important role for external resources in defining the process of value creation. For example, social networks that are externally directed to detect the needs of customers can have potential for value creation (Smith, Collins and Clark, 2005) as can territorial brands, such as an AOC.\nResearch methodology: We employed a case study approach (Eisenhardt and Graebner, 2007; Siggelkow, 2007). A case study design was chosen for several reasons. 
First, the territory selected, Champagne, France, has a rich history in value co-creation and as a territorial brand (Charters, 2011). Second, Champagne firms have a long history in their markets so case study research can help to understand the path-dependent development processes related to their value proposition (Beverland, 2005). Third, understanding the conceptualization of value requires revealing the way that managers' perceptions motivate their strategic behavior; these can only be reached by close interaction between the researcher and participant through direct contact (Arnould and Wallendorf, 1994). Fourth, previous researchers have employed qualitative methods to identify resources and capabilities underpinning value creation processes (Tripsas and Gavetti, 2000; Hoopes et al., 2003).\nChampagne as a territorial brand: value creation behavior from different firms: The findings are summarized in Table III, and the next sections explore them in more detail.\nDiscussion: Territorial brands belong to the category of resources that are \"scale free\" (Levinthal and Wu, 2010) so that more than one firm can potentially use them at the same time. However, the economic value of such a resource varies across firms when it is combined in resource bundles allowing heterogeneity among the firms in terms of rents obtained. Barney (1991) argued that resources are valuable because they facilitate certain product market strategies. Therefore, we observe that the value of a territorial brand is dependent on its use in a product market environment (Priem and Butler, 2001), where competition occurs in some dimensions of the territorial brand, e.g. 
grape sourcing, and cooperation in other dimensions such as adhering to rules of CIVC or investing in branding that denotes a premium positioning for champagne.\nConclusion: Our paper addresses one of the problems of the co-creation of value in a territorial brand as an intangible shared resource with all forms of possible complementarities and interactions with internal resources. In comparison with tangible elements, intangible elements such as territorial brands are less flexible and difficult to expand or contract; they can have multiple uses at the same time, serve simultaneously as inputs and outputs of firms' activities, and are not consumed when in use. Shared resources, such as territorial brands, are not valuable in their own right but they are only valuable when they serve to co-create value through strategies involving other internal resources. Consequently, the choice of resources to bundle with a shared resource can have strong implications on the value created by the firms. Conversely, the interaction between the tangible resources of firms and the territorial brand to create value has an impact on the shared resource.\n", "role": "user" }, { "content": "Purpose: The traditional view of the process of value creation suggests that it occurs inside the firm through its activities or resources. However, there are special cases where firms create value using external shared resources, e.g. a territorial brand. The purpose of this study is to demonstrate how the combination of both internal and external resources co-create value in wine regions.\nDesign/methodology/approach: An in-depth case study of nine firms covering different co-creation processes in Champagne, France. The selection of interviews was designed to cover the diversity of firms within the area with different market positioning. 
Most firms in the region have been selling champagne for more than 50 years, so they have established long-standing relationships with their markets.\nFindings: While there is only one value, Champagne, firms create many different values based on owners' perceptions with diverse effects on the process of value co-creation in the territorial brand. Some firms have strategies which could deteriorate the value of shared resource. This threat needs institutional changes with unknown consequences on the territorial brand.\nResearch limitations/implications: The research only involved one case study with a highly developed territorial brand system. There are multiple wine regions that have considered managing either implicitly or explicitly their shared strategic resources (e.g. a territorial brand). Consequently, the findings may not be applicable to all wine regions but it can provide a \"gold standard\" for regions and wineries that do not realize the impact that their value creation actions can have on the wine region.\nPractical implications: Collective management of shared strategic resources, such as a territorial brand, can be a powerful action to sustain competitive advantage rather than individual actions to develop individual brands. However, it can work only with an institutional organization managing the collective process.\nOriginality/value: The paper offers lessons from a comprehensive and well-known case study where resource bundles co-create value with a territorial brand.\n", "role": "assistant" } ]
High-performance work systems and job satisfaction: a multilevel model
[ "Perceived organizational support", "Engineers", "Intrinsic motivation", "Job satisfaction", "High-performance work system" ]
[ { "content": "Summarize the following paper into structured abstract.\n\nIntroduction: Over the past few decades, human resource management (HRM) literature has suggested that organizations use high-performance work systems (HPWS) to enhance their performance (e.g. Subramony, 2009). In general, HPWS is conceptualized as a system of interconnected HR practices that aim to attract, select, manage, and retain the highest quality human capital (Lepak et al., 2006).\nTheory and hypotheses: The mediating role of perceived organizational support\nMethodology: Sample\nResults: Table I presents means, standard deviations, Cronbach's alpha, and correlations among all the variables examined. However, correlations do not take into consideration the non-independent nature of the data; as a consequence, relationships and significance tests should be regarded with care until estimated with multilevel analyses (Bliese, 2000).\nDiscussion: The goal of our study was to examine the relationship between HPWS (at organizational level) and job satisfaction (at individual level) while focussing on the possible intervening role of POS (at individual level) and the moderating effect of employees' intrinsic motivation. We focussed on engineers, a professional group especially relevant for organizations and society (UNESCO, 2010), and by doing so we took into consideration the necessity for differentiating the workforce (e.g. Boxall, 2012).\nLimitations: This study's contributions should be interpreted while taking its limitations into consideration. Since engineers responded to all the measures at the same time through a single survey instrument, common method bias could be a potential problem. However, given that engineers' perceptions were an important aspect of study, self-report data were the appropriate option (e.g. Conway and Lance 2010; Kooij et al. 2010). Still, the suggestions of Podsakoff et al. (2003, 2012) - e.g. 
to separate predictor and criterion variables in the questionnaire, to employ reverse-coded items, and to assure engineers that there were no right or wrong answers - were drawn upon in order to try to avoid this problem. Additionally, HPWS was measured using the referent-shift composition model and the measurement model described above confirmed the discriminant validity of the constructs. Finally, our research found support for moderation, which reduces the likelihood of common method bias affecting that hypothesis test (Siemsen et al., 2010).\nFuture research: Replications of this research with other outcome variables would be of value. In particular, future research could analyze the effect of HPWS, POS and intrinsic motivation on creativity, according to OST (Eisenberger and Stinglhamber, 2011) and previous research (Grant and Berry, 2011) which has found that intrinsic motivation impacts creativity. Moreover, based on OST and using a contingency perspective, future research could extend our research by incorporating moderators, such as organizational politics, in the relationship between HPWS and POS. Finally, in line with CRT, other individual differences, such as skills, could be additional moderators in the relationship between POS and job satisfaction. More specifically, as most jobs require social interaction skills, social effectiveness constructs, such as political skill, which has recently been found positively associated with job satisfaction (Munyon et al., 2013), could be considered in future studies.\nImplications for practice: The findings of our investigation have several managerial implications. First, HPWS implementation is suitable for organizations with engineers on their staff as a way to increase their job satisfaction.\nImplications for society: HPWS can also prove beneficial for society. 
The findings of our study showed a positive relationship between HPWS and job satisfaction, and job satisfaction has been shown to have positive effects at individual, family, and societal levels. For example, it has been shown to have spillover effects on marital satisfaction, among others (e.g. Ilies et al., 2009).\nConclusion: This study provides evidence that HPWS is positively related to engineers' job satisfaction, and that POS fully mediates this relationship. Moreover, engineers' intrinsic motivation interacts with POS, so that when intrinsic motivation is high, POS has less of a positive effect on job satisfaction than when intrinsic motivation is low. The findings also support that the positive indirect effect of HPWS on job satisfaction through POS is stronger for engineers with low intrinsic motivation than those with high intrinsic motivation. These results contribute to the research on the HRM black-box and to a better understanding of how to enhance the well-being of engineers and society.\n", "role": "user" }, { "content": "Purpose: The purpose of this investigation is to analyze the role of perceived organizational support (POS) and intrinsic motivation in the relationship between high-performance work systems (HPWS) and job satisfaction.\nDesign/methodology/approach: The sample used in this study consists of 180 engineers from 25 companies. 
The model entails a cross-level moderated mediation process that was tested using multilevel structural equation modeling.\nFindings: This research shows that the effect of HPWS via POS on job satisfaction is stronger among engineers with low intrinsic motivation than engineers with high intrinsic motivation.\nPractical implications: Given the findings of this paper, organizations are advised to consider the importance of HPWS perceptions and intrinsic motivation to help strengthen engineer satisfaction.\nOriginality/value: This is the first study to provide evidence for the mediating effect of POS in the relationship between employees' shared perceptions of the HPWS implemented at their companies and their job satisfaction, and the moderating role of employee intrinsic motivation in the relationship between POS resulting from HPWS and job satisfaction.\n", "role": "assistant" } ]
Role of managerial traits in firm-level corruption: evidence from Vietnam
[ "Corruption", "Vietnam", "SMEs", "Managerial capital" ]
[ { "content": "Summarize the following paper into structured abstract.\n\n1. Introduction: There is a growing body of literature that recognises the role of bribery for a country's economic development and firm performance. The level of bribery can be considered as a function of both demand and supply (Baughn et al., 2010) and could encompass public or private sector institutions (Cuervo-Cazurra, 2006). The supply side represents the bribe provider and the demand side of bribery denotes the bribe receiver (Beets, 2005; Sung, 2005). The demand side of bribery has its own conditions and incentives to demand informal payments from firms. For instance, public officials have discretionary power, which is monopoly power (Shleifer and Vishny, 1993) over something that firms prefer to have, e.g. permits or public contracts, or tend to avoid, e.g. taxes and cumbersome regulations (Cuervo-Cazurra, 2016). The supply side of bribery, firms, also has incentives to provide informal payments to public officials (Jeong and Weiner, 2012). For example, firms may take advantage of bribery to lower taxes, to secure public procurements or to gain protection from government in countries with a very inefficient institutional setting (Meon and Weill, 2010; Mutlu, 2014).\n2. Managerial personality traits and corruption: Despite contradictory conclusions about the presence and strength of the relationship between personality traits and entrepreneurial behaviour in the entrepreneurship literature, some studies show evidence of its presence. For example, according to Rauch and Frese (2007), the personality traits matched to entrepreneurship that were significantly associated with entrepreneurial behaviour (business creation and success) included, for example, innovativeness and need for autonomy. In addition, Caliendo et al. (2014) have investigated the link between personality traits and self-employment. 
Their findings show that some traits, such as openness to experience, extraversion and risk tolerance, impact entry, while agreeableness influences exit from self-employment.\n3. Context of Vietnam: Vietnam has been demonstrating a high level of economic growth: real GDP per capita grew at an annual rate of 5.1 per cent during the period 2002-2014 (Benjamin et al., 2017). Nevertheless, according to Transparency International's Corruption Perception Index, the Vietnamese ranking was very low at 107 out of 180 countries with a score of 35 out of 100 points (where lower is more corrupt) in 2017. Empirical works provide evidence that there is a negative relationship between corruption and economic growth (e.g. Mauro, 1995; Johnson et al., 2011). Using data from over 10,000 Vietnamese firms, Bai et al. (2017) found that firm growth reduces bribes as a share of revenues. In the reverse direction, Anh et al. (2016) have reported that a decrease in corruption may lead to an increase in the economic growth of Vietnam. Their results suggest that channels matter for observing the exact impact of corruption on growth. Corruption positively affects economic growth if it operates via the human capital and domestic investment channel; however, it has a negative influence if it operates through the voice and accountability channel. Consequently, corruption can be considered a burden for Vietnam's economic development.\n4. Empirical setting and data: This study exploits the Micro, Small and Medium Enterprise Survey, which was conducted via survey questionnaire with firm owners/managers of private manufacturing enterprises in Vietnam in 2015. Brandt et al. (2016) explain how this survey was implemented in Vietnam. According to Brandt et al. (2016), this survey was conducted in specific narrower areas in each province and/or city to ease implementation. 
Although the enumerators had substantial prior experience, they were provided enumerator training before the implementation of the survey in the summer of 2015. This training was especially important as new sections, such as questions related to managers' personality traits and behavioural characteristics, were introduced to the questionnaire for the first time in 2015.\n5. Results: Table III presents average marginal effects from the logistic estimations and coefficients from the OLS estimations. The author investigates how managerial personality traits are related to the probability of paying a bribe (first column) and the magnitude of bribery (second column). According to Table III, while innovativeness and risk attitudes are positively associated with the probability of paying a bribe, locus of control is negatively related to the likelihood of paying a bribe. All three average marginal effects are statistically significant. More specifically, a more risk loving and innovative personality of the manager predicts approximately 2.7 and 3.1 percentage points higher probability of paying a bribe, respectively. This implies that managers who have more innovative and risk loving managerial traits are more likely to get involved in corrupt activities. This positive relationship is not unexpected: because corruption has a secretive and risky nature, managers who have risk loving and innovative traits tend to find innovative and brave ways to commit corrupt activities. Also, a more internal locus of control predicts around 5.3 percentage points lower likelihood of paying an informal payment. Managers with high internal control may rely less on external forces or illegal ways of doing business to enhance firm performance. This result is in line with the findings of Cherry and Fraedrich (2000) that internally oriented managers exhibit harsher judgement of bribery, are less likely to pay bribes and place more emphasis on lawful decision making.\n6. 
Conclusions: This paper offers several contributions to the literature that focusses on the linkage between corruption and firm performance. First, this study provides a novel understanding of the role of managerial personality traits in firm-level corruption. Previous studies have identified the link between corruption and the following manager-related aspects: demographic characteristics of managers (De Jong et al., 2012), attitudes of managers (Powpaka, 2002), and cost-benefit considerations and moral judgement (Jaakson et al., 2019). However, the role of managerial personality traits in bribe-involving interactions has not yet been explicitly proposed and empirically investigated. Second, this paper extends the previously studied relationship between managerial personality traits and firm performance (Sharma and Tarp, 2018; Opper et al., 2017) by taking corrupt managers' behaviour in a poor business climate into account. Third, while existing work presents evidence that managerial internal locus of control is related to managers' unwillingness to pay bribes (Cherry and Fraedrich, 2000), other managerial traits have yet to be included. This paper establishes a quantitative analysis of the effect of three different managerial traits on corruption and firm performance.\n
Ordinary least squares and logistic estimation techniques are used in this analysis.\nFindings: This paper finds that the traits of risk loving and innovativeness are positively associated with the likelihood of paying a bribe, whereas managers' internal locus of control is negatively related to the probability of paying a bribe. This paper also reveals that managers who have risk loving characteristics and engage in bribe payments are associated with lower firm performance.\nResearch limitations/implications: Although this paper provides a robust and statistically significant empirical analysis, its results are constrained by the use of cross-sectional survey data collected in 2015. Although this paper can provide strong correlations, it does not establish causal linkages, and one should therefore be careful in interpreting the observed patterns as causal impacts.\nOriginality/value: The role of managerial personality traits in corruption interactions has not yet been explicitly proposed and empirically investigated. This paper attempts to fill this void by examining the relationship between managerial traits and corruption tendencies among SMEs in Vietnam.\n
Determinants of manufacturing productivity: pilot study on labor-intensive industries
[ "Manufacturing productivity", "R&D knowledge", "Skills", "Working environment", "Research and development", "Manufacturing industries", "Manufacturing systems", "Costs", "Labour", "Working conditions" ]
[ { "content": "Summarize the following paper into structured abstract.\n\n1. Introduction: The labor-intensive manufacturing industries are generally defined as industries where labor costs are more important than capital costs. More specifically, labor-intensive means use of manpower in the production process with little support of technology. Labor-intensive companies generally have greater earning stability compared to capital-intensive ones due to their lower capital intensity. It is also evident that the remuneration package for labor in this type of industry is much less compared to the industries that are based on capital. What are the reasons behind this? As stated by Shahidul and Anwar (2008), lower working capacity of workforce, unfavorable working environment, poor R&D intensity and low process capability are the main causes of low productivity which lead the industry to provide a poor remuneration package.The concept of manufacturing productivity can be discussed in different ways but the most common aspects available in the literature are; Partial productivity, Total factor productivity and Labor productivity. Partial productivity relates the multiple inputs to net outputs. Whereas, Total factor productivity expresses the ratio of all outputs produced to all resources used. Labor productivity is determined by that person's potential to reach the highest level of his possible performance (Singh et al., 2000; Battisti et al., 2006). Indeed, labor productivity mainly measures the performance of labor (BLS, 1997).The skill of the workforce is a dominant input of a manufacturing system. Caves et al. (1982) observed that efficiency of transformation of inputs to outputs is largely dependent on the skill of the workforce. As stated by Cobb-Douglas, workforce skill is one of the main inputs of a production process. Furthermore, degree of skill is recognized as an effective driving force for enhancing manufacturing performance (Shahidul and Anwar, 2007). 
It is evident that the highly-productive manufacturing organizations tend to have a highly-educated and skilled workforce, more so than the least productive (Tamkin, 2005). Haskel and Hawkes (2003) found that higher qualifications support innovation and a more sophisticated production process, and these are known to be a leading cause of higher productivity. It has been estimated that the equivalent of an extra year of schooling raised productivity between 4.9 and 8.5 per cent (Black and Lynch, 1996); this has been supported by other researchers (Mason and Wagner, 2002). A favorable working environment, such as working with a good team, having a good boss, and liking the physical surroundings in the workplace, is a contributory factor for motivating the workforce towards higher output. Indeed, job security, a sustainable compensation package, and the availability of food and drink at the workplace are also considered to be principal indicators of a favorable working environment. The presence of all these factors in the workplace could gear up the morale of workers and contribute to increased manufacturing productivity. In a more recent study, Battisti and Iona (2009) found a strong association between a favorable working environment and productivity. Huang et al. (2003) stated that when all factors operate at the optimum level, productivity is said to be at its highest level or, in other words, any productivity gap has been reduced. Production schedules, materials movement planning and process layout design are the important components of a manufacturing system. The quality of these parameters is dependent on the R&D activities of the manufacturing enterprise. Eventually R&D can improve the process capability of manufacturing systems and contribute to reducing non-value-added inputs of the manufacturing process. 
Czarnitzki and O'Byrnes (1999) pointed out that R&D intensity in manufacturing industries plays a vital role in increasing process capability, which could lead to achieving higher outputs. Furthermore, they added, the accumulated R&D expenditure of a firm is often interpreted as its knowledge stock of the manufacturing process. Tassey (2009) found that an 8.6 percent positive change in R&D intensity contributed to an increase of about 38 percent in output. Griliches (1980) stated that at an elasticity of 0.07, a 10 percent increase in R&D expenditure would bring a 0.7 percent increase in output. However, the research found that the rate of return on R&D is significantly higher (10 to 27 percent). Sher and Yang (2005), in a study of the Taiwanese integrated circuit industry, found that R&D capability is positively correlated with manufacturing performance. Bangladesh provides an example of the successful use of low cost labor in export industries. The textile and garment sector has made a remarkable contribution to the economy of this country. This sector is expanding at a rate of about 20 percent per year (Siddiqi, 2005). In 2007-2008, garments accounted for 81 percent of the total exports ($12 billion) of that country (Export Process Bureau, 2009). Presently, this sector is using labor-intensive technologies and about 2.5 million workers are engaged in 4,500 registered companies. Regardless of labor productivity, the labor price per hour is fixed at about USD 0.3. Indeed, it looks as though the Bangladesh garment industry is executing the policy of \"cut costs regardless of manufacturing productivity\". However, a low and unsustainable remuneration package has been creating crises in the Bangladesh garments manufacturing sector (Mohiuddin, 2005). Wage and productivity issues are well researched in developed and many developing countries, but no study is found which attempts to address this issue from the Bangladesh perspective. 
This paper aims to address this issue in order to find the answer to: how can the remuneration package of labor be increased? Should industries go for a higher skill level? And does industry need to pay attention to improving the working environment? Or does industry need increased R&D to improve process capability? Furthermore, are these factors associated with higher productivity? Under this scope of study, this research will examine and estimate the impact of these factors on the manufacturing productivity of the garments manufacturing industries of Bangladesh. The structure of this paper is such that the description of research objectives is placed in section 2, section 3 is dedicated to describing the research methodology, analysis and research findings are placed in section 4 and, finally, section 5 is reserved for the conclusion. References and appendices are placed at the end of this paper. 1.1 Productivity and theory of manufacturing operations of labor-intensive industry\n2. Objectives of the research: This study attempts to measure the impact of three vital inputs on manufacturing productivity. The inputs are: the skills of the labor force engaged in the manufacturing process, a favorable working environment in the manufacturing system, and new knowledge of the manufacturing process which is generated by R&D activities. The specific objectives of this research are to measure the influence of the following inputs on manufacturing productivity: degree of skills of the workforce; favorable working environment; and R&D knowledge of the manufacturing process\n3. Research methodology: 3.1 Sampling criteria\n4. Analysis and result: The input-output model (see equation (1) and Figure 1) is used to establish the relationship among dependent and independent variables. To obtain findings and in-depth analysis of the different variables, the statistical techniques of correlation and regression have been used. 
The summary of the statistical analysis for all research variables is shown in Tables III, IV and V. 4.1 Research findings\n5. Conclusions: The garment manufacturers of Bangladesh must have budgets to provide favorable working environments and to conduct R&D to improve the manufacturing process by increasing productivity. The findings of this study could be the basis for formulating a favorable working environment in terms of supplying drinking water, tea, lunch and first aid. Additionally, there must be a provision of regular payment of salary and wages, bonuses, and paid sick and casual leave. However, moderate degrees of \"bossing\" and empowering would also be contributory factors to motivate the workforce. The findings of this study could be the guideline for labor-intensive industries, especially Bangladesh garment manufacturing, to design R&D for the manufacturing process to reduce non-value-added inputs and to improve productivity, which would eventually increase revenue. Thus, the additional amount of revenue would support the Bangladesh garment-manufacturing sector in designing higher remuneration packages for the workforce. On the basis of this research it can be concluded that expenditure to create a favorable working environment and to execute R&D activities to generate new knowledge on the manufacturing process is essentially important for labor-intensive industries to utilize the skill potential of the workforce. The return on this investment would greatly contribute to increased outputs and productivity.\n
Industrial category A represents the manufacturing operations which are based on skill of labor. Category B is a group of industries which provides a favorable working environment to utilize the potential of skill in the manufacturing process. Category C is a specialized group of industries and its manufacturing process is dependent on R&D. Input-output data for manufacturing operations of the sample industries have been analyzed by using standard statistical techniques to establish the relationship between dependent and independent variables.\nFindings: - The degree of skill and productivity are positively correlated, but not strongly (r<0.5). The study found that productivity is positively (r>0.5) associated with a favorable working environment. However, a significantly positive correlation (r>0.7) is found between R&D expenditure and productivity. The study concludes that a higher degree of skills, a favorable working environment and R&D are important inputs to a labor-intensive manufacturing process, which are positively associated with productivity.\nResearch limitations/implications: - Manufacturing productivity is an important parameter of industrial growth, and this paper addresses this issue. The current work addresses the garments sector, i.e. a part of the labor-intensive industries. Though this work is focused on only one part of this sector, the findings of this study have significant policy implications. The results would be useful for manufacturing business executives and policy makers.\nOriginality/value: - This study addresses the important issue of productivity of labor-intensive manufacturing industries, and generates quantitative evidence of the impact of degree of skills, favorable working environment and R&D on productivity. The generated information enriches the present knowledge stock of manufacturing systems. The findings could be the basis for further academic research on manufacturing systems.\n
Assessment of disaster risk management in Mexico
[ "Disasters", "Emergency response", "Manmade disaster", "Climate", "Natural hazard", "Weather", "Financial instruments", "Insurance", "Economic planning and development", "Risk transfer", "Loss sharing" ]
[ { "content": "Summarize the following paper into structured abstract.\n\nIntroduction: Mexico is a country highly exposed to a wide variety of natural hazards. The southern Pacific coast of the country belongs to the so-called Ring of Fire of tectonic activity. Tropical storms and hurricanes are a constant hazard on both the Pacific and Atlantic coasts. Droughts are increasingly affecting the whole country, even with rising presence in typically wet regions, as well as a desertification process is hitting vast regions of the north. As such, natural hazards in Mexico have increased their economic impact over the past four decades (see Figure 1).\nDefinition of terms and summary of public instruments for disaster risk management: The Mexican government's intervention in the field of disasters goes beyond civil protection, as it includes a complete variety of public instruments for disaster risk management. A public instrument is a legal term that is used for any formally executed written document that can be attributed to the public sector that formally expresses a legally enforceable act, process or contractual duty, obligation, or right (Grifis, 2010). Risk management is the set of pre- and post-disaster actions toward facing negative consequences of hazards (Freeman and Mechler, 2001). Pre-disaster actions include risk transfer (i.e. insurance), risk assessment (monitoring, mapping, etc.), mitigation works (i.e. infrastructure strengthening, etc.), and preparedness. Post-disaster actions include emergency response (aid, clean-up, etc.) and loss sharing for rehabilitation and reconstruction (i.e. private-public and national-local resources). From its part, civil protection is a concept embracing both pre- and post-disaster. Derived from the civil defense terminology, civil protection is the network of awareness and emergency response for disaster and crisis state avoidance, initially based on various technical interventions usually known as preparedness. 
Preparedness is the management capability before a disaster occurs to provide an effective and efficient (prompt and due) reaction to face a disaster (Freeman et al., 2002).\nSection I: loss sharing: A. FONDEN\nSection II: risk transfer: Risk transfer from natural hazards in Mexico is concentrated almost entirely in the agricultural insurance market. In total, 64 percent of agricultural insurance premiums are subsidized by the government (CONEVAL, 2011), and the national agricultural area covered by any insurance scheme has grown significantly over the last ten years. The main scheme of agricultural insurance subsidy is managed by AGROASEMEX and the CADENA program, which are set forth below to further take stock of their achievements and challenges.\nSection III: risk reduction: The nature of risk reduction instruments is much more complex. Unlike risk sharing and risk transfer, risk reduction instruments rely on cutting-edge investments. Risk reduction investments include both the construction of new infrastructure and the repair or rebuilding of existing infrastructure. Now, let us briefly describe the corresponding public instrument for disaster risk reduction in Mexico, FOPREDEN.\nSection IV: conclusions: As seen from the above discussion, if the instruments described herein work properly, they might produce multiplying effects. The feasibility of investing in disaster mitigation works, insurance, contingent funds, or any other instrument must depend on a comprehensive project development (rural-urban, public-private entities, national-province and prevention-reaction). 
Investments to mitigate disaster risk still seem insufficient.\n", "role": "user" }, { "content": "Purpose: The purpose of this paper is to provide a set of policy suggestions for integrating risk management and increasing risk reduction measures and planning.\nDesign/methodology/approach: It is based on a brief description of the disaster risk management programs in Mexico and a review of their recent available assessments, and it makes a brief economic analysis of their performance to conclude with some policy suggestions.\nFindings: Despite its novel design, the still low penetration of governmental instruments for disaster risk reduction in Mexico has led to society's high reliance on post-disaster measures. It has encouraged moral hazard among potential victims. Even though crop insurance coverage has increased over the past decade, disaster prevention instruments are still underused. Access to prevention funding requires project proposals from national and sub-national governments based on concrete risk assessments. However, the prevailing lack of institutional capacity to elaborate proposals from sub-national governments seems to explain this to a large extent. The paper provides a set of suggestions in this regard.\nOriginality/value: There is no recent integral assessment of disaster risk in Mexico. Although there is a recent OECD review of the National Civil Protection System, its analysis leaves out catastrophic agricultural insurance, a critical part of the comprehensive risk management of a country. On the other hand, there are recent evaluations of public programs for disaster risk management, but these consist of only individual program evaluations, lacking integrative and comparative analysis. Thus, this paper provides a comprehensive view of government risk management and concludes with a series of policy recommendations.\n
Managing technology innovation paradoxes through multi-level ambidexterity capabilities
[ "Technological change", "Technology innovation paradoxes", "Ambidexterity capabilities", "Integration mechanisms" ]
[ { "content": "Summarize the following paper into structured abstract.\n\n1. Introduction: In pursuing new core technology, incumbent firms face the ambidextrous challenges of \"exploiting\" existing capabilities in order to support advanced \"exploratory\" technology (Taylor and Helfat, 2009). The two types of innovations involved in technological changes, exploitative and exploratory innovation, cause competition for limited resources to occur within an organization, thereby creating tension (Tushman and O'Reilly, 1996; Raisch and Birkinshaw, 2008; Brown and Eisenhardt, 1997; Ge et al., 2016). In addition, the organizational inertia that drives companies to engage in exploitative innovation might evade exploratory innovation because of uncertainties and risks related to exploratory innovation, leading to difficulties adapting to dramatic technological change (Groysberg and Lee, 2009; Taylor and Helfat, 2009).Firms with ambidexterity capabilities might manage such technology innovation paradoxes (Chen et al., 2018; O'Reilly and Tushman, 2007). Ambidexterity capability enables firms to engage in both exploratory and exploitative forms of innovation (Tushman and O'Reilly, 1996). Radical innovation gained through exploratory innovation enables firms to quickly respond to technological changes within the industry (Benner and Tushman, 2003; He and Wong, 2004). Successful firms are usually able to address the contradiction between the two types of innovation, which helps them adapt to technological changes (Du et al., 2013). The logic behind this is that the requirements of the market environment for firms are, to a certain extent, often in conflict. 
In the context of technological change, such conflict is more prominent (Brown and Eisenhardt, 1997). Existing studies have proposed three types of ambidexterity capability, namely, structural ambidexterity capability, contextual ambidexterity capability and leadership-based ambidexterity capability (Khanagha et al., 2014; Guttel et al., 2015; Du et al., 2013). While these studies indicate that these three types of capabilities function at different organizational levels, they generally discuss these capabilities separately, thereby lacking a comprehensive view of how these ambidexterity capabilities jointly address innovation paradoxes. Extant research indicates that innovation paradoxes are interwoven across organizational levels and may trigger a number of different forms of tension throughout organizations (March, 1991; Smith and Tushman, 2005). Hence, scholars have called for ambidexterity research taking a holistic perspective to examine ambidexterity capabilities at multiple organizational levels, so as to reflect this complexity (Raisch and Birkinshaw, 2008; Andriopoulos and Lewis, 2009). The context of technology innovation and technological change provides us with an opportunity to fill this research gap. In addition, in the research stream of technology innovation and technological change, little attention has been paid to the manifestation of technology innovation paradoxes in technological change (Shao et al., 2017; Smith and Lewis, 2011). Hence, in this study, we address the following two research questions: (1) How do technology innovation paradoxes manifest themselves in technological changes? (2) How do incumbent firms manage technology innovation paradoxes through multi-level organizational ambidexterity? To do so, we examine technology innovation in cloud computing, which has taken shape and brought about changes to the information technology industry. Specifically, we examine how a traditional software company, China Standard Software Co., Ltd. 
(CS2C), successfully navigated the technology transition to cloud computing from its existing operating systems (OS) business by managing innovation paradoxes through multi-level ambidexterity capabilities. Our findings show that technology innovation paradoxes manifest themselves as contradictory relationships and mutual support relationships between exploitative and exploratory innovation. In addition, we identify three integration mechanisms as key to the multi-level organizational ambidexterity capabilities in managing technology innovation paradoxes in technological changes.The rest of the paper is organized as follows. We first provide the theoretical background. Following that, we describe our research methods. Then, we describe the case, followed by the case analysis. The paper ends with implications, limitations and directions for future research.\n2. Theoretical background: 2.1 Technology innovation and technological change\n3. Research methods: This study examines a single exploratory case and conducts an in-depth analysis of how technology innovation paradoxes manifest themselves in technological changes and how incumbent firms manage technology innovation paradoxes through multi-level organizational ambidexterity. Case studies often represent empirical descriptions of specific manifestations of a phenomenon based on multiple data sources (Yin, 2013), and are often valid in the study of complex processes within an organization (Eisenhardt and Graebner, 2007). Aiming to address the two \"how\" research questions, it is suitable to apply a case study method to studies such as ours, which focuses on dealing with a dynamic process under a single case context (Pan and Tan, 2011). Next, we describe our research context, followed by the data collection and analysis process. 3.1 Research context\n4. Case description: Affiliated with China National Software and Services Co., Ltd, CS2C was established in 2003. 
The company specializes in the development of basic software, such as OS and office software products. As the leading company in China's software industry, it provides software products widely used in key national economic fields, such as the government, national defense, finance, taxation, auditing and civil aviation. It has ranked number one in market shares among all Linux OS in China for many years and accounts for more than 25% of the Chinese Linux OS market. With the rapid development of China's cloud computing market, the company has transformed from a provider of OS products and services to a provider of cloud computing software and services, and has successfully promoted cloud computing technology innovation and change. When the concept of cloud computing emerged in 2008, top managers noticed the related technology trend and decided to pursue cloud computing technology. Based on the conditions of the company and the general understanding of cloud computing by software firms at that time, the possible options for pursuing the cloud computing technology were a public cloud, a software-as-a-service cloud, and cloud computing software products. The company decided to develop cloud computing platform software for several reasons. First, the company's OS products had a market share of above 25%, while its office software products had a very small market share. Hence, a high level of capital investment was needed to pursue the idea of a public cloud, as several large firms already did. Second, it would also be difficult to pursue a software-as-a-service cloud, which was related to office software, considering the small market share of its products. The company's executives estimated that the construction of basic information system platforms in different sectors in the future might be based on cloud computing technologies, rather than on existing OS technologies. 4.1 Phase 1: Hindered development of exploratory innovation\n5. 
Case analysis: As shown in Figure 1, in CS2C, technology innovation paradoxes in technological changes manifested themselves in contradictory and mutual support relationships between exploratory and exploitative forms of innovation. In addition, the three types of organizational ambidexterity capabilities drove CS2C to implement certain integration mechanisms at different organizational levels. Those integration mechanisms helped CS2C to resolve exploratory and exploitative innovation paradoxes during the technological change toward cloud computing. A detailed discussion of these issues follows. 5.1 Exploratory innovation and exploitative innovation paradoxes during technological change toward cloud computing\n6. Implications: This study aims to improve the existing understanding of how technology innovation paradoxes manifest themselves in technological changes and how incumbent firms manage technology innovation paradoxes through multi-level organizational ambidexterity. Our case analysis shows that technology innovation paradoxes manifest themselves as contradictory relationships and mutual support relationships between exploitative and exploratory forms of innovation. Multi-level organizational ambidexterity capabilities were found to drive the implementation of three integration mechanisms: incentive measures, information sharing channels and organizational learning. These mechanisms have further enabled CS2C to manage technology innovation paradoxes in technological changes toward cloud computing technology. 6.1 Theoretical implications\n7. Limitations and directions for future research: This paper, while insightful, has limitations, several of which provide directions for future research. Case studies often represent empirical descriptions of specific manifestations of a phenomenon, based on multiple data sources (Yin, 2013) and are often valid for studying processes within an organization (Eisenhardt and Graebner, 2007). 
While this study is based on a single representative and typical case (Eisenhardt et al., 2016), it still faces doubts about its generality and external validity (Walsham, 2006). Contextual factors, such as industry, country, ownership type and market competition, may limit the generalizability of our findings. For example, on the one hand, as a traditional OS provider, CS2C had no experience in the cloud computing field. Hence, the technology innovation paradoxes it faced might have been more prominent than in other contexts at the beginning. On the other hand, CS2C was affiliated with China National Software and Services Co., Ltd, whose shareholders at that time were eager to enter the cloud computing market to gain competitive advantages. Hence, CS2C gained significant support from key stakeholders while developing its ambidexterity capabilities.We recognize that statistical generalization is impossible from a single case. We nevertheless contend that our research is generally applicable beyond its singular context, as the derived model is not only grounded in the empirical reality of our case data but is also corroborated by the extant literature. In terms of future research, studies could statistically validate the propositions of this research, so that our findings can be better refined. This study examines an incumbent firm with abundant market channels experience, and an established brand reputation in the industry. How will newly founded firms or firms with fewer resources seize the opportunities presented by new core technology and achieve sustainable development? What are the processes and mechanisms involved for them? 
Further research could collect and analyze other cases, supplement and improve the conclusions drawn from this study, and systematically unfold mechanisms in regard to managing technology paradoxes during technological change.\n", "role": "user" }, { "content": "Purpose: In this study, the authors aim to address the following two research questions: (1) How do technology innovation paradoxes manifest themselves in technological changes? (2) How do incumbent firms manage technology innovation paradoxes through multi-level organizational ambidexterity? To do so, the authors examine technology innovation in cloud computing, which has taken shape and brought about changes to the information technology industry. Specifically, the authors examine how a traditional software company, China Standard Software Co., Ltd. (CS2C), successfully navigated the technological transition to cloud computing from its existing operating systems business by managing innovation paradoxes through multi-level ambidexterity capabilities.\nDesign/methodology/approach: This study examines a single exploratory case and conducts an in-depth analysis of how technology innovation paradoxes manifest themselves in technological changes and how incumbent firms manage technology innovation paradoxes through multi-level organizational ambidexterity. The data collection and analysis occurred simultaneously through three phases. In Phase 1, one of the authors who had worked at CS2C for many years enabled the authors to obtain access to the company. The data analysis during this phase provided the authors with the history and current situation of CS2C, enabling them to understand the external circumstances, such as particular historical period, and internal conditions, such as cultural and technological changes, that would be relevant throughout the course of their study. It also helped the authors identify organizational ambidexterity capability as the guiding theoretical concept for their research. 
In Phase 2, the authors engaged in site visits and conducted detailed interviews with employees working at CS2C. In Phase 3, most of the data analysis was conducted. When the interview data were not sufficient to support the theoretical analysis, additional data were collected via phone calls and emails, to assure data-theory-model alignment.\nFindings: The authors' findings show that technology innovation paradoxes manifest themselves as contradictory relationships and mutual support relationships between exploitative and exploratory innovation. In addition, the authors identify three integration mechanisms as key to multi-level organizational ambidexterity capabilities in managing technology innovation paradoxes in technological changes.\nOriginality/value: Three important theoretical implications can be drawn from our case analysis. First, this research contributes to the knowledge of innovation paradoxes during technological changes. Second, this research provides a model of multi-level organizational ambidexterity capability in technological changes. Third, this research proposes three integration mechanisms driven by three types of ambidexterity capability at different organizational levels.\n", "role": "assistant" } ]
Assessing the distributional impact of community forest income: Evidence from south-western Nigeria
[ "Income distribution", "Environmental economics", "Sustainable development", "Welfare economics" ]
[ { "content": "Summarize the following paper into structured abstract.\n\nIntroduction: Forest resources are a key component of the natural resource base of any community and they play a fundamental role in the socio-economic well-being of the people. This is particularly so in Africa where most of the countries including Nigeria have large rural populations that subsist on the forest resources (Inoni, 2009). Over two-thirds of Africa's 600 million people obtain a major proportion of their subsistence and cash income from a large and diverse set of forest products and forest-related activities (Arnold and Townson, 1998; Kaimowitz, 2003; CIFOR, 2005). The forests offer numerous benefits to adjacent communities and society at large (Wollenberg and Ingles, 1998; Oksanen et al., 2003; Lawes et al., 2004). These include consumptive resources, spiritual and aesthetic needs, employment and ecological services such as carbon sequestration and water regulation. However, in many situations access to such benefits is neither uniform nor equitable between nor within communities. The majority of forests, by their very nature, are located within rural and frequently remote areas which are underdeveloped in terms of infrastructure, government services, markets and jobs. Thus, communities living in and adjacent to forests are characterized by seemingly high levels of poverty and limited livelihood opportunities (Wunder, 2001; Sunderlin et al., 2005). This poses both a developmental and an environmental challenge for State agencies the world over. It is a developmental challenge because forest dependent peoples are amongst the most marginalized and neglected communities. It is an environmental challenge because the spectre of resource depletion always looms as people continue to use forest resources and to transform land to farming, mining and urban uses.\nResearch methodology: The study was carried out in Ogun State, south-western Nigeria. 
The state was carved out of the old western state in 1976. The estimated human population is 3.73 million, occupying a land area of approximately 16,400 square kilometers (sq.km) (National Bureau of Statistics, 2008). It is largely agrarian in nature with a large rural population who depend on subsistence agriculture and forest resources as a major source of livelihood (Agbonlahor, 2010). The state has a total of nine forest reserves which were inherited from the former western state. The reserves altogether cover an area of 2,732.62 sq.km, which is about 16 per cent of the total land mass of Ogun State and spread across the three Senatorial Districts of the State (Ogun State). Data were collected on the socio-economic characteristics of the households, the various economic activities engaged in by the people and the income associated with them. The households were selected using a combination of multistage and random sampling techniques. A total of 160 households were selected; however, data from 140 households were found useful for the analysis. The analytical techniques employed include descriptive statistics and the Gini decomposition technique. To estimate the association between forest income and rural income inequality, the study adopted the Gini decomposition technique proposed by Yao (1999) as adopted by Fisher (2004). The Gini coefficient (G) for aggregate income is given by: (Equation 1) where I is the total number of income sources (number); W
The main policy implication of the findings is therefore that the forests can have an important role in mitigating poverty and income inequality in the study area. In terms of income redistribution, the results suggest that income inequality can be reduced through policies that would assist the poor who mostly depend on forest extraction so as to come out of poverty. Towards this end, sustainable access to forest resources and increased public spending on the non-forest dependent sector of agriculture (farming) may be desirable. This is because income from agriculture also helped reduce income inequality in the study area.\n", "role": "user" }, { "content": "Purpose: Emerging evidence revealed a high rate of dependence of marginal people on forest resources in developing countries for both subsistence use and cash income. The purpose of this paper is to examine the rural livelihoods welfare dimensions of community forest income in south-western Nigeria.\nDesign/methodology/approach: Forest activities and income pattern data were collected from 160 rural households selected using a multistage sampling approach. Descriptive statistics and the Gini decomposition technique were used to analyse the data.\nFindings: The result shows that forest income accounts for about 38.2 per cent of total household income and was the first ranked source of income in the study area. The Gini decomposition analysis showed that access to forest income is income inequality reducing in the study area. The study findings suggest that household welfare in rural Nigeria could be improved through policies and programmes that can stimulate sustainable access to forest resources and assist households to earn income from alternative sources such as agriculture.\nOriginality/value: The result of the study helped provide information on the uses and benefits of community forests as they affect the well-being of rural people. 
Also, it provides a benchmark for policy makers, government agencies and NGOs involved in the rural livelihood outcomes of forest communities.\n", "role": "assistant" } ]
Variance analysis
[ "Poetry", "Loss" ]
[ { "content": "Summarize the following paper into structured abstract.\n\nVariance analysis: For Tim, i.m. FredThe hours go slow and yawn and scratch themselvesand we are naming parts of balance sheets,tight in a windowless room, totting the cost,while the first day of spring unfurls outside;and you are motorbiking scented country lanesabsorbing this year's deficit - its countless loss.Fluorescent tubes hum songs of swarming bees,no breath of blossom in this air-recycled roomno sun with surplus warmth, awakening assetsno daffodils turning heads to drink the light.While you are chopping logs: axe-gleam, sweat,pine-smell, chips flying, piling contingency.I'm reckoning the fast depreciation of my day.Doves coo on chimneys, primed for production,cat stretches in a patch of golden afternoon,ponds thrash till frogspawn bubbles up its yearly yield;and you are in the garden, planting onion setsthumbing each into the damp soil, balancing life.\n", "role": "user" }, { "content": "Purpose: - This poem aims to use the language of accounting and auditing to examine how we deal with the loss of a loved one. It also aims to juxtapose the vocabulary of a budgeting seminar with the world outside, and the way the \"you\" of the poem deals with his grief on the first anniversary of his father's death.\nDesign/methodology/approach: - This is a free verse poem that uses longish lines of long, heavily punctuated sentences to reflect the slow passage of time in a budgeting seminar, and the waste of the first day of spring, particularly when we consider mortality. The interior description and language counterbalances the activity and brightness of the world outside. The poem deliberately references Naming of Parts by Henry Reed, on the futility of army exercises on a beautiful spring day. It is designed to end with hopefulness.\nFindings: - The poem uses a newly encountered vocabulary to connect frustration at being cooped up and a partner's grieving. 
Auditing terminology combines with concrete images of the spring world to convey the movement, growth and vitality outside the room.\nResearch limitations/implications: - It is possible to mould the language of accountancy into a poem about spring and about grief.\nOriginality/value: - The poem uses accounting jargon, which seems at first disconnected with the subject matter, to emphasise and underscore the timeless poetic theme of mortality.\n", "role": "assistant" } ]
A conceptual framework for information technology governance effectiveness in private organizations
[ "Information technology", "Corporate governance", "Information technology governance", "Organizational performance", "Balanced Scorecard" ]
[ { "content": "Summarize the following paper into structured abstract.\n\n1 Introduction: Information technology (IT) is an integral part and fundamental to support, sustain, and grow a business. Knowing this, corporations make huge investments in IT. Gartner (2010) reports that despite the current economic slowdown, worldwide IT spending reached $3.4 trillion in 2010, a 4.6 percent increase from 2009; yet a large portion of IT investment does not guarantee high returns. The Standish Group (2006) reports that approximately 67 percent of IT projects failed or were challenged to justify the investment. As corporations invest highly in and rely heavily on IT, they expose themselves to high risks. Thus, organizations need to review continuously and protect information assets from disaster. The September 11 terrorist attack on the World Trade Center in New York City and corporate financial misdeeds like Worldcom and Enron are among notable disasters.Assuring sound business systems and adequate internal controls, regulatory frameworks are instituted. Sarbanes-Oxley Act (SOX) 2002 was enacted to enhance corporate governance, foster organization responsibilities, strengthen internal controls, and increase accountabilities (Damianides, 2004). Although SOX requires all financial reporting needs to be assessed, it does not ensure that businesses are secure fully. The Basel II Accord was enacted to prevent major bank failures; banks complying with the Basel II Accord pledge to safeguard themselves against financial and operational risks, including technology risks.In the last decade, IT governance captured the attention of both practitioners and academics and was cited as a subset of corporate governance (Korac-Kakabadse and Kakabadse, 2001; Lainhart, 2000). Rising interest in IT governance is attributed to SOX 2002, the Basel II Accord, and the acknowledgment that IT investments must be protected from severe losses. 
Lainhart (2000) suggests that concern with IT governance focuses primarily on policies and procedures that define how organizations direct and control the use of technology and protect its information from IT-related risks.IT governance involves discharging roles and responsibilities in assuring sound IT practices in organizations. Van Grembergen (2000) refers to IT governance as:[...] the ability of the organizations to achieve specified goal(s) or organizational capacity, exercised by the board, executive management and IT management to control the formulation and implementation of IT strategy and to ensure the fusion of business and IT.As a professional body established in 1998, the IT Governance Institute (ITGI, 2003) identifies IT governance as:[...] the responsibility of the board of directors and executive management, where it is an integral part of enterprise governance and consists of the leadership and organizational structures and processes that ensure the organization's IT sustains and extends the organization's strategy and objectives.Despite the growing importance of IT governance in corporations, the focuses of most extant IT governance research have been on:* IT governance;* the link between IT governance and its determinants; and* the link between IT governance and its impact.Each focus appears separate from one another, but gaps exist for research that links IT governance effectiveness to both its determinants and impacts. This paper investigates IT governance literature to conceptualize a framework for IT governance effectiveness that integrates its determinants and impacts in private organizations.The paper has six sections. This section introduces the background of the research. In Section 2, we discuss research design. A discussion on IT governance is in Section 3. 
In Section 4, we discuss organizational performance as an impact of IT governance effectiveness; we also offer a proposition that links IT governance effectiveness and organizational performance. In Section 5, we provide a discussion of determinants of IT governance effectiveness and their link to IT governance; we suggest a set of propositions and recommend a conceptual research framework for further exploration. The last section concludes the paper.\n2 Research design: We use a systematic literature search to conceptualize an IT governance effectiveness framework adapted from Bryman's (2008) approach. In the first step, we define the research question as a means to guide the purpose of the review. The second step establishes the criteria to guide the selection of studies through identification of appropriate databases for searches of the research context and use of relevant keywords. The third step synthesizes the literature review with the development of a conceptual framework. The research question is: RQ1. What are the determinants and impact of IT governance effectiveness in private organizations? In this section, we indicate the bibliographic databases used in our searches and the criteria for inclusion and exclusion of research article types, a practice particularly evident in health research (Alexanderson and Norlund, 2004; Robson and Bigelow, 2010). In information systems and management research - specifically those that contribute to the development of conceptual frameworks - the extent of such descriptions is not apparent (Masrek et al., 2007; Mellat-Parast and Digman, 2007; Patelis et al., 2005; Svensson, 2006; Teh et al., 2008). 
Using the following online databases, the period for the literature search was December 2007-May 2010:* Scholar Google.* ACM.* Ebscohost.* Emerald.* IEEE.* Springer.* Gale.* Proquest.* Science Direct.Since IT governance is traditionally a board room agenda, we suggest that other streams of research that are of relevance to IT governance include strategic information systems planning (SISP), strategic alignment maturity (SAM), information systems security, business-IT alignment, and International Organization for Standardization (ISO) in information systems. Thus, we used the following keywords, titles, and abstract in our literature search: IT governance, SISP, SAM, information systems security, business-IT alignment, and ISO information systems. Evaluating the research articles to answer our research question, we used journal articles, conference papers, and dissertations published between 2000 and 2010. We also viewed web sites referred to in the literature to gain further insights into IT governance. We excluded book reviews, editorials, prefaces, news, workshops, panels, and poster sessions from the search. In subsequent sections, we discuss our findings from the literature review and synthesize development of a conceptual framework for IT governance effectiveness.\n3 IT governance: IT governance is cited as a strategic issue that requires commitment at a strategic level. According to Simonsson and Johnson (2006), IT governance relates to IT decision-making authority, organization capabilities, structures, processes, and relational mechanisms that result in an alignment between business and IT. Based on a literature review, there is no consensus on a standard definition of IT governance; the literature does suggest various perspectives of IT governance (Table I).Based on these perspectives, IT governance spans involvement of personnel at various levels of an enterprise (i.e. from strategic to operational levels). 
Each level of the organizational hierarchy and IT function has different IT governance needs. Hence, IT governance encapsulates governing structure, leadership, processes, and relational mechanisms to address performance while providing assurances that information is protected from IT-related risks.Supporting IT governance implementation and monitoring performance, various associations and regulatory bodies introduced IT governance frameworks. Each framework addresses specific objectives including IT control structure, protection of IT investment, security and control of IT, protection of information from losses, assuring information integrity, quality IT services, and quality software (Table II). More recently, the literature suggests that IT governance dimensions include structure, process, and relational mechanisms (DeHaes and Van Grembergen, 2004, 2005, 2006, 2008a, b, 2009, 2010; Grant et al., 2007).Structure consists of organizational units, roles, and responsibilities for making IT decisions between management and IT committee cooperation (Weill and Ross, 2004). The governance structure details how many committee levels or layers there are and outlines each of their primary roles (Grant et al., 2007). Structure describes the way IT function is organized and the organizational structure of the IT division in the organisation (Rau, 2004).Process refers to strategic decision-making, SISP, and monitoring key performance indicators including service level agreements, IT demand management, IT portfolio management, and chargeback systems (Symons, 2005). 
Besides IT balanced scorecards, project tracking systems, and IT chargeback systems are necessary for effective performance measurement mechanisms (Syaiful and Green, 2007) and enable the IT strategy committee to detect and correct any deviations and alter strategy when required (ITGI, 2003).Relational mechanisms manage the soft side of IT governance that covers announcements, advocates, channels, and educational efforts that disseminate IT governance principles, policies, and outcomes of IT decision-making processes (DeHaes and Van Grembergen, 2005; Weill and Ross, 2004). Peterson (2000) refers to relational mechanisms as \"[...] the active participation of and collaborative relationships among, corporate leaders, IT management and business management\". We adopt the structure, process, and relational elements of IT governance.\n4 Impact of IT governance effectiveness: Many extant studies attempt to measure the IT performance by quantifying the returns of each IT investment decision; use of economic measures like return on investment (ROI), net present value (NPV), etc. at the organizational level are the most common forms (Ford, 1994; Semich, 1994; Quinn and Baily, 1994). Others resort to organizational performance as the outcome of IT performance that covers increased market share, achieved competitive advantage, improved internal efficiency of organizations' operation, and increased annual sales revenue (Teo and King, 1996). Pervan and Maimbo (2005) argue that financial performance might be an inaccurate measure when investigating IT performance; they suggest perceived benefits arising from organizational performance like operations, product delivery, customer service, and staff. While financial measures such as profit margin, operating margin, and ROI are typical, they are historical in perspective. 
Recognizing the need to complement the historical perspective and balance against perceptions alone, Kaplan and Norton (1996) developed the balanced scorecard approach. The balanced scorecard consists of four perspectives that business executives need to identify: financial perspective, customer, internal business processes, and learning and growth. Financial perspective relates to strategy, implementation, and execution of bottom-line improvements (Kaplan and Norton, 1996). Operating income, return-on-capital, and economic value added are examples of measures used commonly. The customer perspective is concerned with the customer and market segments. Examples of measures that relate to this perspective include customer retention, new customer acquisition, customer profitability, and market share (Kaplan and Norton, 1996). The internal business processes perspective includes internal processes that enable the business to deliver value propositions and satisfy shareholder expectations. Examples of measures include time to develop the next generation of products and process capabilities (Kaplan and Norton, 1996). The learning and growth perspective defines the infrastructure of the organization's marketplace sustainability; the infrastructure refers to people, systems, and organizational procedures specifically. Employee satisfaction, retention, training, and skills are examples of measures in this perspective (Kaplan and Norton, 1996).Use of the balanced scorecard approach has become an acceptable approach for evaluating IT performance; in an earlier study, Seddon et al. (2002) found that roughly 30 percent of organizations applied the approach to evaluate IT investment. 
Other scholars use the balanced scorecard in IT project selection (Asosheh et al., 2010), IT project performance (Milis and Mercken, 2004), IT-business alignment (Hu and Huang, 2006; Huang and Hu, 2007) and performance of IT systems such as enterprise resource planning (ERP) (Fang and Lin, 2006).The impact of IT on organizational performance is of interest to many scholars. Peterson (2002) and Ribbers et al. (2002) argue that IT performance is characterized by high levels of effective IT governance structures and processes. Ortiz (2003) found a strong positive relationship between IT governance implementation and organizational performance; Gu et al. (2008) did not find a direct influence of IT governance on IT investment performance in the USA and Canada. However, IT governance was found to moderate the relationship between IT capital and IT investments. Bowen et al. (2007) suggest that IT governance effectiveness influences IT governance and IT implementation success. Linking practice with strategy, the framework proposed here adopts the balanced scorecard framework for monitoring corporate performance as the impact of IT governance effectiveness. We propose that:P1. IT governance effectiveness is associated positively with organizational performance.\n5 Determinants of IT governance effectiveness: Based on a review of literature, we identify five factors that serve as determinants of IT governance: organizational demographics, information intensity, organizational culture, external environment characteristics, and IT functional characteristics.5.1 Organizational demographics\n6 Conclusion: Organizations face increasing intricacies in business given a multitude of challenges such as globalization, knowledge economy, and even disasters. At the same time, organizations are expected to conform to corporate governance best practices. 
IT governance is a part of corporate governance cited as a means to help organizations manage risk and protect themselves from technology-related losses. The importance of IT governance is evident through the attention it receives from scholars and practitioners. As noted previously, the focus of extant IT governance effectiveness research is separate from its link to determinants and impacts. Bridging this gap, this paper begins with RQ1. As a result of a literature review, this paper develops a conceptual framework that integrates both determinants and impact of IT governance. It draws on literature from IT governance, SISP, SAM, business-IT alignment, ISO in information systems, and information security domains to establish a framework as a guide for further research and practice. For practitioners - especially board members, CEOs, CIOs, and those leading IT functions at senior and middle management levels - the framework offers insight into determinants of IT governance effectiveness in private organizations. Practitioners have a better understanding of IT governance effectiveness and its impact on organization performance, based on the balanced scorecard's four perspectives. Given such insight, appropriate actions may be considered to create and foster an environment that promotes IT governance effectiveness. For researchers, the framework clarifies the contributing factors of IT governance and its impact; it provides opportunities for researchers to validate and test the framework. The results of validation will further contribute to knowledge in IT governance and its role in corporations. This, in turn, enhances information systems and information security curricula. It is worth mentioning that the relationships between determinants (i.e. 
organizational demographics, information intensity, organizational culture, external environment characteristics, IT function characteristics and IT governance, and IT governance and organizational performance) call for more exploration. With insight into these determining factors, organizations further gain an understanding of their impacts on IT governance effectiveness. Consequently, this assists organizations with improvements in organizational performance.\n", "role": "user" }, { "content": "Purpose: - The purpose of this paper is to develop a conceptual framework that examines information technology (IT) governance effectiveness, its determinants, and its impacts on private organizations.\nDesign/methodology/approach: - The research draws on extant literature in IT governance, strategic information systems planning, strategic alignment maturity, information systems security, business and IT alignment, International Organization for Standardization in information systems, and organizational performance to identify determining factors for IT governance effectiveness, IT governance effectiveness factors, and organizational performance.\nFindings: - The results of the review suggest 14 propositions and five factors grouped into determinants including organizational demographics, information intensity, organizational culture, external environment characteristics, and IT function characteristics. Linking organizational practices with strategy, the proposed framework adopts the Balanced Scorecard four perspectives approach for monitoring organizational performance as the impact of IT governance effectiveness. IT governance dimensions in the research comprise structure, process, and relational mechanisms.\nOriginality/value: - IT governance is a part of corporate governance to help organizations manage risks and protect themselves from technology-related losses. 
The framework provides a starting point for researchers and practitioners to further examine IT governance practices. For researchers, the framework clarifies the determining factors of IT governance, dimensions of IT governance, and impacts through proposed relationships. For practitioners, the framework can be used to gain insight into the contributing factors of IT governance effectiveness.\n", "role": "assistant" } ]
The sexual orientation wage gap in the USA
[ "Employment", "Family", "Gender", "Empowerment", "Equality", "Division of labour" ]
[ { "content": "Summarize the following paper into structured abstract.\n\n__NO_TITLE__: As the gay rights movement gains momentum, greater attention is directed toward the discrimination faced by gay, lesbian, and bisexual individuals. Analyzing the existence and magnitude of a sexual orientation wage gap is highly relevant, as it provides a window into the discrimination faced by gay individuals in their daily lives. Furthermore, detailed analysis of the sexual orientation wage gap reveals much about the impact of child rearing in the emergence of wage differences between homosexuals and heterosexuals, and between men and women. Specifically, is the male/female wage gap a function of patriarchy, or of expected productivity loss resulting from childbearing and child rearing? Understanding the nature of the sexual orientation wage gap therefore sheds light on the determinants of the gendered wage gap as well.\n1. Literature review: Whereas popular opinion tends to believe that direct discrimination is the main cause of wage differentials related to gender and sexual orientation, the existing body of academic literature favors much more indirect causes. If outright discrimination is the main driver of wage differences, one might expect to see that the forces of heterosexism negatively affect the incomes of homosexual men and women alike. Most academic studies, however, find that homosexual and bisexual women earn more than heterosexual women[1], while homosexual and bisexual men earn less than their heterosexual counterparts[2]. The fact that homosexuality affects men and women differently hints that direct sexism and homophobic discrimination may not be the main causes of the wage gaps. In order to explain why homosexuality increases females' wages and decreases those of males, economists have examined the effects of differences in family duties, marriage and child rearing, and differences in job selection. Of course, these are not easily separable.\n2. 
Data description: The data used in this report come from the General Social Survey (GSS) for the years 1972-2014[4]. GSS data are collected via in-person interviews and include a random sample of adult US residents. The sex of sexual partners was not included in GSS data prior to 1988, so those earlier years are not included in our analysis.\n3. Analysis: With so few observations on bisexual men and women, and a potentially large number of categorical variables, variable selection is critical. We report results from five variable selection techniques: unrestricted OLS, a p-value-oriented \"general to specific\" variable selection strategy (what is often called the London School of Economics approach) (Darnell and Evans, 1990), least angle regression or \"LARS\" (Efron et al., 2004), the least absolute shrinkage and selection operator or \"LASSO\" (Tibshirani, 1996), and forward stagewise linear regression (Hastie et al., 2007). The independent variables include the variables delineated in Table I. Column (1) of Table III shows the results of an unrestricted regression, i.e. one including all variables. The general-to-specific stepwise method, the LASSO, LARS, and forward stagewise linear regression all select the exact same specification as the unrestricted regression (Column (2) in Table III). Column (3) re-estimates (2) but drops the children variable. The results are remarkably consistent across the different variations of the model; the ordering of wages across sexual orientations does not change, and most coefficients differ only in the hundredths or thousandths places.\n4. Conclusions: According to our analysis, homosexuality affects wages differently by gender. Our results are consistent with previous studies which find that homosexual women earn more than straight women, while homosexual men earn less than straight men. 
Homosexuality results in a wage discount for men and a wage premium for women; however, this premium is not large enough to compensate for the wage discount women receive due to simply being female.\n", "role": "user" }, { "content": "Purpose: The purpose of this paper is to determine the nature of the wage gap between genders and sexual orientation.\nDesign/methodology/approach: The paper uses OLS on pooled repeated cross-sections.\nFindings: The differences in wages between gay/straight men and women mirror what would be expected from labor force attachment more so than direct heterosexism.\nResearch limitations/implications: The authors use a functional definition of sexual preference that reflects whether the respondent had sex with someone of the same gender in the same year. It does not ask whether the person identifies publicly as gay/lesbian/bisexual.\nOriginality/value: The authors verify and extend earlier findings on the sexual orientation and gendered wage gap.\n", "role": "assistant" } ]
An empirical analysis of search engines' response to web search queries associated with the classroom setting
[ "Education", "Children", "Readability", "Search engines", "Misinformation", "K-12", "Search engine results pages", "Web search queries", "Child-appropriateness" ]
[ { "content": "Summarize the following paper into structured abstract.\n\nIntroduction: Search Engines (SEs) are the \"go-to\" tools for children's online information discovery (Rowlands et al., 2008). Children's use of SEs goes beyond accessing sites for leisure purposes, as they also utilize these tools for school-related activities. In fact, teachers in the USA regularly assign search tasks that require the use of SEs to their students (Hussain et al., 2011; Scholastic, 2018). Children, however, are still known to experience difficulty completing successful search sessions (Gossen, 2016). A contributing factor toward improving children's search experience could be their understanding of how SEs work. Unfortunately, search literacy is rarely part of the Kindergarten to twelfth grade (K-12) curriculum (Campbell et al., 2018; Notess, 2006; Laxman, 2010). Even when search literacy is included, it can overlook information on the functionality of SEs (Buckingham, 2015). Further, it more commonly targets educators (Campbell et al., 2018; Notess, 2006; Laxman, 2010). Educational and child-oriented SEs, such as Kidrex[1] and Kidzsearch[2], could also alleviate challenges regarding the completion of search exercises. Yet, their use is not a requirement in the classroom. Instead, children tend to favor more mainstream, popular SEs, like Google and Bing (Foss et al., 2012). These SEs are generally tailored for adults and as such, are not necessarily equipped to support children's educational searches-those posted within the classroom environment or formulated with the intent of locating curriculum-related materials. Thus, children could access resources that neither align with requirements inherent to the classroom environment nor their capabilities. Technology enhanced learning platforms, such as online tutorials and personalized electronic learning, will continue to gain prominence in response to the iGeneration[3], the most tech-savvy yet (Minal Anand, 2019). 
With training now being made available to school teachers (Google, 2019), SEs become natural partners to ease online information discovery and facilitate, whenever possible, searching while learning (Gwizdka et al., 2016). Therefore, it is imperative to understand the effectiveness of existing SEs in responding to search queries pertinent to the K-12 curriculum.\nBackground and related work: We discuss in this section how children interact with SEs, the use of these tools in the classroom, and the role of safe-search filters in preventing this audience from accessing inappropriate resources.\nMethod: We employ a simulation-based experimental approach to examine the strengths and limitations of SEs commonly utilized by children for conducting online inquiries pertaining to the third and fifth grade curriculum.\nResults and analysis: We report the outcomes of the analysis conducted to understand how SEs respond to search queries associated with the classroom setting:\nDiscussion: Based on the outcomes from the analysis we conducted in the results and analysis section, we observed differences in the way child-oriented SEs restricted access to inappropriate material, when compared to popular SEs. Unfortunately, this affected how they responded to educational inquiries as we found that these child-oriented SEs often prevented the retrieval of resources in response to searches conducted for locating curriculum-related materials. This was especially prominent in searches where the search queries contained keywords that suggested that the user intends to access inappropriate materials (e.g. female sexual anatomy which is related to human anatomy). Having educational searches fail to retrieve results is detrimental as this can prevent children from accessing the right resources and lead them to believe no resources exist, thus discontinuing their search. 
Incomplete search sessions can leave children with either no information or mistrust of SEs, making them believe that these tools cannot provide them with vital information.\nLimitations: A limitation of our work resulted from the small amount of unique search queries gathered from school-aged users conducting search tasks in the classroom (i.e. 100). We resorted to enhancing our search query corpus through analyzing and extracting search queries from child-written reviews.\nConclusion: In this work, we simulated searches using search queries that capture information needs in different contexts (3,850 unique search queries). We studied six strategies based on four SEs: Google, Bing, Kidrex, and Kidzsearch. Using these SEs, we evaluated their performance in terms of offering resources that are relevant to task objectives and user capabilities in response to classroom-related inquiries.\n", "role": "user" }, { "content": "Purpose: The purpose of this paper is to examine strengths and limitations that search engines (SEs) exhibit when responding to web search queries associated with the grade school curriculum\nDesign/methodology/approach: The authors employed a simulation-based experimental approach to conduct an in-depth empirical examination of SEs and used web search queries that capture information needs in different search scenarios.\nFindings: Outcomes from this study highlight that child-oriented SEs are more effective than traditional ones when filtering inappropriate resources, but often fail to retrieve educational materials. All SEs examined offered resources at reading levels higher than that of the target audience and often prioritized resources with popular top-level domain (e.g. \".com\").\nPractical implications: Findings have implications for human intervention, search literacy in schools, and the enhancement of existing SEs. 
Results shed light on the impact on children's education that results from introducing misconceptions about SEs when these tools either retrieve no results or offer irrelevant resources, in response to web search queries pertinent to the grade school curriculum.\nOriginality/value: The authors examined child-oriented and popular SEs' retrieval of resources aligning with task objectives and user capabilities-resources that match user reading skills, do not contain hate-speech and sexually-explicit content, are non-opinionated, and are curriculum-relevant. Findings identified limitations of existing SEs (whether directly or indirectly supporting young users) and demonstrate the need to improve SE filtering and ranking algorithms.\n", "role": "assistant" } ]
Key performance indicators (KPI) for the sustainability of building energy efficiency retrofit (BEER) in hotel buildings in China
[ "Building energy efficiency", "Sustainability", "Hotel", "Fuzzy set theory", "Key performance indicators", "China" ]
[ { "content": "Summarize the following paper into structured abstract.\n\n1. Introduction: 1.1 Research background\n2. Research methodology: In order to achieve the objective stated previously, first, a literature review and in-depth interviews with industry experts and academic researchers were conducted, which filtered the performance indicators for assessing sustainability. Second, a questionnaire survey was conducted to collect data from various groups of experts for analyzing the significance of the selected performance indicators. Experts were invited to indicate the significance of individual indicators by using the five-point Likert scale. Then, data analysis was conducted, with both the reliability and validity of the data checked using the statistical tool Statistical Package for the Social Sciences (SPSS). Scale ranking for overall and each group was established based on the mean values of significance of indicators. Finally, a model based on the Fuzzy set theory was designed to identify the Key Performance Indicators (KPIs) for sustainability of BEER in hotel buildings.\n2.1 In-depth interview\n3. Data analysis: The data were analyzed using the SPSS. The reliability of the five-point scale used in the survey was determined using Cronbach's coefficient alpha, which measures the internal consistency among the factors. A previous study suggests that a value of Cronbach's alpha of 0.7 or above normally indicates a reliable set of items (Ceng and Huang, 2005). The value of this test was 0.761, which was greater than 0.7, indicating that the five-point scale measurement was reliable. Three statistical analyses, namely, scale ranking, ANOVA, and Fuzzy set theory analysis, were undertaken on the data. The analysis procedure and findings of the study are detailed in the following sections.\n3.1 Ranking of performance indicators\n4. Discussions of findings: 4.1 KPI1 - Quality performance\n5. 
Conclusions: Building Energy Efficiency Retrofit (BEER) projects play major roles in energy & cost saving, carbon reduction, and environmental protection, particularly in hotel buildings. Their sustainability performance deserves more attention when implementing BEER projects. This study identified and ranked the KPIs for the sustainability of BEER in hotel buildings according to their importance, based on the views of experts with experience in BEER. Fuzzy set theory was adopted in identifying the KPIs. Eight KPIs were selected from the 12 initially selected performance indicators, which were collected based on in-depth interviews and a literature review. They are:\n1. quality performance;\n2. hotel energy management;\n3. cost performance;\n4. project profitability;\n5. energy consumption & resources saving;\n6. health and safety;\n7. stakeholders' satisfaction; and\n8. innovation and improvement.\nThis study focuses on sustainability at the project level. Traditional project management pays attention to the \"iron triangle\" of project performance - cost, time, and quality. Quality and cost performances still have higher priorities in this study's findings. Other indicators related to energy, environment, and people's satisfaction are also identified as sustainable objectives. This research focuses on hotel buildings in China. Some indicators, such as \"hotel energy management\", are unique to hotel buildings. Because of the different requirements for building types and building ownerships, the KPIs of sustainability of BEER for different buildings may be variable. Findings in this study for hotel buildings may not be directly relevant to other types of building. For further study, quantitative sub-indicators could be identified to measure the eight KPIs. Furthermore, the weighting system as well as benchmarking of each sub-indicator could be examined in future studies. By using the identified KPIs, the sustainability performance of hotel BEER projects can be assessed. 
The application of KPIs can also help decision-makers to identify an optimal solution between alternatives, which presents the maximum sustainability performance.\n", "role": "user" }, { "content": "Purpose: - Building energy efficiency retrofit (BEER) not only provides excellent opportunities to reduce overall energy consumption of buildings in a city but also encourages environmental protection, the rational use of resources, and occupants' healthcare, which all contribute towards the sustainability of existing buildings. However, there is a lack of effective performance indicators to measure the sustainability of BEER projects. The aim of this paper is to formulate a list of key performance indicators (KPI) for the sustainability assessment of BEER in hotel buildings.\nDesign/methodology/approach: - First, a literature review and in-depth interviews with industry experts and academic researchers were conducted, which filtered the performance indicators for assessing sustainability. Second, a questionnaire survey was conducted to collect data from various groups of experts to analyze the significance of the selected performance indicators. Finally, a model based on fuzzy set theory was designed to identify the key performance indicators (KPIs) for the sustainability of BEER.\nFindings: - Eight KPIs were identified based on fuzzy set theory in this study. They are: quality performance, hotel energy management, cost performance, project profitability, energy consumption and resources saving, health and safety, stakeholder satisfaction, and innovation and improvement.\nPractical implications: - The KPIs of sustainability of BEER identified for hotel buildings in China in this study can be a useful reference for other similar research. However, with the different requirements for building types and building ownerships, the KPIs of sustainability of BEER for different buildings may be variable. 
The findings in this study may not be directly relevant to other types of building.\nOriginality/value: - Key performance indicators for the sustainability assessment of BEER in hotel buildings in China are identified and analyzed in this study. The KPIs can help decision-makers to identify an optimal solution between alternatives, which presents the maximum sustainability performance.\n", "role": "assistant" } ]
The "organizing" paradigm in public administration: learning by focusing on customer orientation
[ "Training", "Organizational theory", "Organizing", "Customer orientation", "Public administration" ]
[ { "content": "Summarize the following paper into structured abstract.\n\nIntroduction: Public administration is a critical context due to demanding clients, barriers to equity and intrinsic organizational complexity. This complexity derives from the dual character of the public sector, that is, to be both a means of delivery and an element of societal self-governance. In this way, public administration has to contain the unresolved value conflicts and moral ambivalence of society, at the intersection of conflicting needs and alternative definitions of the common good (Hoggett, 2006). A specific element of the conflictual nature of public purpose is represented by the tension between an ethic of care (as a compassionate concern for the individual) and an ethic of equity (as a responsibility towards all cases equally) and by the tension between universalism and particularism (for instance, the impartial treatment of individuals may produce discrimination towards groups). Furthermore, public administration routinely faces the paradox of dealing with increasing demand amid decreased resources. This, in turn, creates a range of management challenges (Pokharel and Hult, 2010).\nThe study: context, sample and procedures: The context of the case study is a training device for public administration in the North of Italy. Public bodies chose to attend the course with one or two representatives to develop their competencies in conflict management at work.\nFindings: Quantitative analysis\nDiscussion: In the first meeting, participants pay attention most of all to the criticalities of the organization, by focusing on the constraints that do not permit the delivery of a high-quality service. These criticalities relate to a general inattention of the organization to several issues, from individual motivation to leadership. 
The main actors of this organizational theory are on one hand the macrosocial dimension and on the other hand the individuals, in a sort of conflictual polarization. The common feeling is 'absence': the macrosocial dimension seems to be distant both from a temporal and a spatial point of view; it is placed before the concrete process and far away from the concrete persons who live in the organization. Therefore, this model seems not to be related to the concrete and situated organizational life.\nConclusion, practical implications and limitations: The study contributes to highlighting the need for a practical focus on the conceptualizations of organizations in training. Within organization studies, the term \"organization\" is often underdescribed or treated as an end in itself rather than a 'dispensable means' (Du Gay and Vikkelso, 2014, p. 749). Representing the organization as a process of organizing allows participants to distinguish between the prescribed and discretionary parts of task performance (Jaques, 1976). In this direction, our proposal is to use the construct of customer orientation as an artefact that supports the pragmatic call to experience proposed by an approach to organization studies as a practical science of organizing (O'Connor, 2012). The reference to this construct appears as a tool which favours a conceptualization of the organization as a process of actions and decisions. Following Du Gay and Vikkelso's (2014, p. 750) call to a practical science of organizing, in our proposal this tool shares with other conceptual devices the need to be used as:\n", "role": "user" }, { "content": "Purpose: Public Administration complexity derives from the dual character of the public sector, that is, to be a means of delivery and an element of societal self-governance. The bureaucratic framework is a way to address this tension, while simultaneously public administration needs to focus on a more processual view of the organization to improve service quality. 
Training could be a key point to sustain such a transition, by focusing on customer orientation. The purpose of the study is to explore whether a training device focused on customer orientation could develop employees' theory on organization as being process-oriented, i.e. \"organizing\".\nDesign/methodology/approach: The longitudinal study is on a training case study in Italy. Participating in the training course were 34 employees belonging to 16 Public Administrations. Data on participants' organization representations were collected at the beginning and at the end of the course through the SWOT matrix. Data analysis used a quali-quantitative approach.\nFindings: Findings show a shift to viewing the service organization as a process implicated in uncertainties and unpredictable events rather than the realization of an ideal model.\nOriginality/value: The study contributes to highlighting the need for a practical focus on the conceptualization of organization in training, as the term \"organization\" is often underdescribed or treated as an end in itself.\n", "role": "assistant" } ]
Construction of meanings in business relationships and networks
[ "Construction of meaning", "B2B marketing", "Storytelling", "Business relationships", "Interaction behaviours", "Interactive communication" ]
[ { "content": "Summarize the following paper into structured abstract.\n\n1. Introduction: Studies of buyer-seller relationships between organizations in the IMP research stream have highlighted the role of interaction processes in relation to the outcomes of such business relationships for the parties, for the dynamics of relationships and for business networks. Postulating interaction as a central business process that determines how business relationships develop (Hakansson et al., 2009) leads to the question: how do the interaction behaviours of actors in inter-organizational business relationships emerge? The question is a pertinent one considering that several studies show actors in business relationships act and react in a context that leaves little space for extensive problem solving and planned autonomous action (Hakansson and Ford, 2002; Guercini et al., 2014). Given the role of interaction processes in the development of business relationships, investigating factors that influence and shape interaction behaviours becomes vital if we are to explain how business relationships and networks develop.\n2. Market practices: The practice-based approach to conceptualizing markets and marketing originates from research in the field of science and technology studies. In particular, a central issue of Latour's theory was the role of material objects in what he called socio-material agency (Latour, 1987). However, the \"practice perspective\" consists of a rather composite field of studies (Nicolini, 2012) and has been pursued further in economic sociology and other social sciences (e.g. Schatzki, 2006). Mapping the practice-based research stream is beyond the scope of our study and competence. We will look only at the inroads the practice perspective has made in the marketing discipline.\n3. 
The socio-cognitive perspective: The socio-cognitive approach to conceptualizing markets and market dynamics began with examining the concept of the product market as it is used in economics and largely adopted in management. Indeed, in economics and management the market tends to be defined as any place in which the sellers of a particular good or service can meet with the buyers of that good or service; the emphasis is on buyers and sellers of a \"particular good or service\" and close substitutes (e.g. Tirole, 1988). Economists thus tend to conceive the market as an exchange mechanism, rather than an institution and social structure. The product market concept that prevails in marketing theory and practice has been the object of some criticism, as it does not consider the social factors in market formation and dynamics and on the whole tends to lead to an \"under socialized\" account of the market institution (Granovetter, 1985). Critics of the product market concept argue that the market concept has not been sufficiently scrutinized, and when addressed, this has not been done with enough rigour (Ellis et al., 2010; Venkatesh et al., 2006).\n4. Common ground and differences: Exploring the possibilities of \"borrowing\" from the two research streams to theorize the phenomena at the centre of interest for the IMP requires an examination of what the three streams of research have in common and how their approaches and conceptualizations differ.\n5. Construction of meanings in business relationships and networks: Two themes that emerge in the socio-cognitive and practice-based research streams can orient further conceptualizations of interaction processes in business relationships in the IMP research. First, both streams expound on how cognitive elaborations (interpretations and meanings) emerge and affect behaviours. They also offer some interesting insights that might orient future research on interaction processes in business relationships. 
Second, both streams have been concerned with communication processes and practices and their role in the cognitive elaboration of actors that might inspire research on communication practices in business relationships and networks.\n6. Final considerations on future research: While the elaboration of meanings and communication processes seem important in organizing, it appears particularly significant at the boundaries of formal organizations and under certain circumstances. The reason is that at the boundaries of an organization different thought worlds and knowledge structures are confronted. Confronting and connecting different knowledge and belief systems in inter-organizational business relationships involves both creating and coping with interdependences. There is enough episodic evidence in various business studies that suggests communication processes and the elaboration of meaning are particularly critical in inter-organizational business relationships that concern making use of (or benefiting from) others and being of use to others (Hakansson and Waluszewski, 2007). 
If we admit interdependences as one of the central features of the business landscape, it requires of management a \"shift in actors' self-perception from self as independent to self as part of a larger whole\" (Bigelow, 1992), which can be demanding given that the management perspective traditionally implies a considerable degree of autonomy and independence of individual businesses.\n", "role": "user" }, { "content": "Purpose: The purpose of this paper is to address an issue related to the role of interaction processes in the development of customer-supplier relationships in business markets.\nDesign/methodology/approach: Focusing on the role of cognition in interaction behaviours in business relationships, the authors examine two research streams that offer perspectives on interaction processes akin to the IMP - the socio-cognitive perspective and the practice-based approach to markets and marketing.\nFindings: The two research streams analysed contribute to understanding the link between cognition and interaction behaviours by pointing to the construction of meanings as an important factor in interaction behaviours and indicating storytelling as a tool to construct meanings among the actors.\nOriginality/value: This paper is among the few studies that focus attention on communication processes in business relationships and networks.\n", "role": "assistant" } ]
Building regional capacity: lessons from Leadership South West
[ "Leadership development", "Organizations", "Regional development", "Government policy", "United Kingdom" ]
[ { "content": "Summarize the following paper into structured abstract.\n\nIntroduction: Building leadership capacity across the public, private and voluntary sectors has become a fundamental objective of UK government policy (DfES, 2002). Improvements in leadership and management are regarded as a driving force in the reform of public services and central to innovation and growth in the commercial sector, yet considerable concern is expressed about the extent of the \"skills gap\" (DTI, 2004; CIM, 2007). In response to calls for \"better leadership\", bodies such as Regional Development Agencies (RDAs), Learning and Skills Councils, Sector Skills Councils and other governmental and non-governmental bodies are increasingly working to build capacity at local and regional levels, with higher education institutions (HEIs) regarded as key facilitators in this process (Leitch, 2006). Despite this, there remains little agreement on the nature of leadership (Grint, 2005); little consensus on how best to develop it (Barker, 1997; Day, 2001); and remarkably little evidence of its impact on individual and organisational performance (Burgoyne et al., 2004). Leadership, it would seem, \"is like the Abominable Snowman, whose footprints are everywhere but who is nowhere to be seen\" (Bennis and Nanus, 1985, p. 19). Given these very real concerns, what role can an HEI play in regional capacity building for a concept such as leadership whilst also delivering tangible outcomes for the investment? It is this question that this paper hopes to address - to map out a form of engagement that grows critical awareness and reflection on leadership development and practice across a geographic area whilst also strengthening regional infrastructure (in terms of networks and partnerships) and delivering tangible learning outputs and products. 
In order to do this we will reflect on our joint experience of involvement with the Leadership South West (LSW) initiative in the South West of England from 2003 to 2006.\nLSW: a case study in HE-led regional capacity building: Background\nDiscussion: The account of LSW presented in this paper offers some rich insights into the processes of developing regional capacity. It is unashamedly our own story and, as such, may downplay alternative perspectives. It is without doubt that many of the successes and impacts are the result of broader collective effort yet, in this paper, we have endeavoured to draw out the key role that an initiative such as ours was able to play. In this section we will consolidate these insights into ten general lessons that could be applied to similar initiatives:\n1. Invest in people. An initiative such as LSW stands or falls on the quality of its people. It was about creating a dialogue between HE, independent providers, policy makers, employers and other stakeholders in the region and, as such, required face-to-face human contact. Whilst technical solutions such as websites and toolkits may be tempting in terms of producing tangible outcomes for the investment they are relatively poor at driving forward discussions, building relationships and establishing a shared vision. The LSW project manager possessed the energy, enthusiasm and experience to create such connections and whilst other solutions may now be possible we doubt they would have been effective from the outset. Furthermore, the project manager was careful in the people she chose to work on her team and invested considerable effort and resource in their development, thereby creating a collective expertise that endures beyond the current initiative.\n2. Create webs. Phase One of LSW was primarily concerned with the creation of \"webs\" of relationships in the region. This term is chosen deliberately for a number of reasons. 
Webs are about connection - building a structure for communication and collaboration. They are created for a specific purpose and require the active engagement of a builder/facilitator. Webs take time to construct but are ultimately rather fragile, difficult to see and constantly changing. Likewise, when applied to leadership, webs inevitably involve some \"spin\" - a mechanism for the creation of sense and meaning in complex social situations (Weick, 1995). Webs, however, also hold the risk of entrapment - arguably we are all caught in webs of meaning that we ourselves have spun (Geertz, 1973) and with the proliferation of leadership and management initiatives both within and beyond the region there is a great danger of initiatives becoming self-serving. Where projects vie for dominance over one another partnerships can break down into competition - in remaining as apolitical as possible, it is our sense that LSW largely managed to avoid this trap and was able to bring competing organisations to the discussion table.3. Find trusted partners. Key to the development of wide-ranging impact is the establishment of mutually beneficial strategic alliances. Through working with partners such as the IoD we were able to increase our reach and begin to influence decisions at regional and national-level. Such relationships need to be founded on a deep level of trust between partners, not just between organisations but between the people within them. Establishing such relationships takes time and a certain degree of luck. It requires opportunities to demonstrate credibility, competence and building a sense of legitimacy where partners feel safe to collaborate without fear of competition. When partnerships are working well, the individuals within them become more important than the organisations they represent. They also enable the development of a \"bank account\" of goodwill and/or reciprocal benefits that is credited and debited over time.4. Identify quick wins. 
A publicly funded initiative such as LSW needs to deliver on a range of agreed outputs. In order to stay on track, whilst leaving sufficient time to work on less tangible outcomes, it is important to identify the likely source of these outputs as early as possible. Once again, in the case of LSW, the IoD franchise proved invaluable and offered some quick wins both for LSW and the IoD. Furthermore, when offering leadership and management development to employers, some quick and tangible outcomes are helpful in building commitment and a willingness to invest in longer-term growth.5. Build on success. As a relatively small initiative LSW was able to expand through building on previous successes. Initiatives such as the IoD Director Development Programme were used as stepping-stones for subsequent initiatives such as Inspire. By operating in this manner we were able to establish a track record and plan for growth organically and incrementally. In a similar vein, where initiatives were less successful we learnt from them and moved on. Over time, we learnt to focus on what we were good at and to bring in additional expertise where required.6. Be adaptable. The process of regional development is complex and unpredictable. Despite being only vaguely aware of this at the time, we found ourselves drawing on ideas from complexity and systems theory. We learnt that sometimes small input can lead to large impacts and vice versa. We learnt to focus our effort where it gave the greatest returns and not to persist with ideas that were proving unproductive. We learned to change and adapt to our surroundings and the continually changing landscape of regional policy and delivery. Remaining alert to our changing context was an important factor in targeting the most significant levers for action at a particular moment in time.7. Challenge the status quo. An initiative such as this inevitably confronts numerous challenges in the way HE is structured and delivered at a regional level. 
The project manager found herself working across the academic/administrative divide and straddling business, governmental and university cultures (what Whitchurch, 2007 refers to as a \"blended professional\"); the research fellow found himself working on projects that were in tension with traditional academic priorities; and we found ourselves delivering programmes to people who generally wanted input from practitioners rather than academic faculty. Throughout this, though, we endeavoured to take the end-user perspective - considering what could add value for them. The funding and relative autonomy of our project gave reasonable leverage to challenge restrictive university procedures but this remained subject to negotiation on a project by project basis.8. Embed learning. At the heart of this initiative lay a genuine commitment to learning. Throughout we created opportunities for the two-way flow of information and understanding. We engaged in dialogue to better appreciate the practitioner perspective and endeavoured to raise the level of the debate across the region. Whilst some formalised research and evaluation was conducted and disseminated during the course of this initiative much of the learning could be conceived of as informal and permeated throughout the LSW ethos - thus, whilst not initially conceived as an \"action research\" project, the LSW approach was largely structured around cycles of planning, action and review. Indeed, much of what we did could be conceived of as \"evaluation for knowledge\" to obtain a \"deeper understanding in some specific area or policy field\" (Chelimsky, 1997, p. 100). In this respect HEIs can act as more than knowledge repositories, becoming hubs for learning and the exchange of ideas and practice within their regions (a view recently echoed by Hogarth et al. (2007)).9. Plan for succession. It is sometimes said that the sign of a good leader is that they work themselves out of a job. 
Thus, one eye is always to succession and change - to ensuring the long-term viability of an enterprise without generating dependency on a single individual. This is the view that was applied to LSW - programmes were only run where financially sustainable so as to minimise dependency on external funding, and capacity was grown within the system rather than restricted to key individuals. Such a perspective is essential to ensuring sustainable regional development and it was important for LSW to \"walk the talk\" - to be seen to lead in the way that it espoused.10. Think big. A final lesson from LSW was the importance of taking a broader view on policy, practice and thinking about leadership and management. We discovered that the region was best served through research and activities targeted at the broader national and international picture rather than focussing solely on initiatives within the region. By taking such an approach LSW was able to establish its credentials as a world-class centre with a regional outlook and, by consequence, enhance the profile and impact of sponsors and partners as well as raising the aspirations of a provincial region.\nConclusion: This paper has presented the manner in which the LSW initiative achieved wide scale regional impact from a small team and limited resources. Central to this approach has been a commitment to the principles of learning, sharing and applying and an endeavour to gain influence through strategic alliances and partnerships. Initiatives such as LSW offer powerful platforms from which universities can engage with regional employers and policy makers, making a lasting impact within the region whilst also developing commercially viable products and programmes. 
Such initiatives constitute effective outreach activity that can greatly increase participation in both accredited and non-accredited HE provision and facilitate multi-directional knowledge exchange between different stakeholders within and beyond the region.Fundamental to the success of such initiatives is the development of broader social networks and social capital. Whilst not readily measurable in any objective sense, it is argued that such capacity is inextricably linked to regional performance through raising the quality of debates and the precision of supply. Growing a discerning client base and informed policy makers is a process in which HEIs can take an active part by becoming regional hubs and knowledge repositories. We, the authors, would advise that greater attention is given to these considerations when investing in regional capacity building and promoting employer engagement.\n", "role": "user" }, { "content": "Purpose: - The purpose of this paper is to present experiences and insights from a higher education-led initiative to build leadership capacity within the South West of England in order to shed light on the processes and mechanisms of regional capacity building.\nDesign/methodology/approach: - The approach was one of participative action research, whereby the authors were actively involved in the shaping and delivery of the initiative, responding to ongoing feedback and reflection. 
The account given within this paper is an autoethnographic case study that identifies the main phases and lessons learned from the initiative.\nFindings: - The paper identifies a number of discrete phases within the initiative, some of the challenges and how they were confronted, and concludes with a set of ten principles that may help support regional capacity building initiatives for management and leadership.\nOriginality/value: - Despite increasing emphasis on capacity building and a tendency to promote leadership as a lever for change, limited academic research has been conducted into either of these processes at a regional level. This paper seeks to contribute to both theory and practice in these areas by combining the insights of an academic and a practitioner involved in one such initiative and highlighting the underlying and emergent processes therein.\n", "role": "assistant" } ]
Personal development plans: evidence from research on their use
[ "Self development", "Workplace learning", "Performance appraisal" ]
[ { "content": "Summarize the following paper into structured abstract.\n\n__NO_TITLE__: As the term indicates, a personal development plan (PDP) is a document completed by an individual that details their intentions and actions with regard to their own development. PDPs have become increasingly popular in organizations as perceived demands for staff to become more adaptable, flexible and empowered have grown in response to internal and external organizational change. Moves towards flatter organizational structures and the adoption of ideas about employability rather than security of employment have given credence to the view that individuals should take greater responsibility for their own development. The adoption of PDPs is seen to be in tune with the times. In addition to being a tool to be used by individuals for thinking through and recording their development, completed PDPs can provide a potential source of data about the organization's changing stock of human capital.\nExamples of the use of PDPs: Rudimentary versions of PDPs have been associated with traditional training courses for many years, typically at the end of the course under the heading of \"action planning\". To a large extent we can attribute this to the concern of course providers to address the \"transfer of learning problem\". It might be provocative to note that use of the term \"action planning\" suggests that after a course no new learning is required, just action.There has also been a long tradition of linkages between planning for personal development and appraisal, and for those organizations where there has not been this tradition, modifying the appraisal system can be seen as offering an apparently quick and cost-effective means of introducing PDPs. However, the history of combining the two is not a good one on the whole. 
It is still not uncommon to find that where personal development planning is combined with appraisal, the paperwork that accompanies the process tends to privilege training rather than work-based learning. The standard PDP form in these circumstances may even go so far as to provide the list of courses currently offered by the training department to be ticked as appropriate. Not only is learning then confused with training, but also appraiser and appraisee may even find themselves using course titles to define initial learning needs.On the other hand many organizations now provide more sophisticated PDP forms for the use of their staff. Their precise structures vary, although typically they are designed to cover a couple of areas. The first of these is concerned with where the individual is now. The learner's current role is analyzed in terms of skills and knowledge requirements and then the ways and extent to which the person matches these are identified. This analysis points up the \"gaps\" in the latter and indicates the individual's learning needs. The second area is a plan of the steps to be taken to bridge these learning gaps. Sometimes a third element is added. This is typically some form of learning log so that progress in the achievement of intended outcomes, plus any unanticipated learning, can be recorded.Of course, however sophisticated its design, a PDP is still only a completed form. It is only likely to be of value if what it conveys is grounded in sound diagnosis and, subsequently, if real development actually follows through from its implementation.\nPDPs and diagnosis: It is not often that the process of creating PDPs stands alone. Typically it is associated with other formal methods for managing and developing staff, like appraisal for example. Another of the common associations, although in comparatively fewer organizations than is the case with appraisal, is with development centers. 
A well-designed development center can make a considerable contribution to the creation of a valid and useable PDP.Other potential sources of information and analysis for the learner include their manager and colleagues. They are potential sources of feedback in identifying learning needs, for example 360-degree feedback. Further, the learner's manager can be a source of information about broader organizational aims and issues to inform their (the learner's) decisions and can \"represent\" the organization in ensuring integration of its needs with those of the learner.\nPDPs and development: Despite the confidence that the learner may have in the validity of their plan and the motivation to learn that comes from it, it is still only a plan. Even the best laid plans may be no more than wish-lists in practice. The individual is much more likely to experience success in implementation and gain real development when they have the support and challenge of others who are significant to them and their learning, and who have been involved in the diagnosis and other aspects of the creation of their plan.The learner's manager is in a potentially powerful position structurally in making things happen. Along with the learner's colleagues they can provide coaching or mentoring. 
Fellow learners are comrades in a common endeavor and can be involved both in the development of each other's plans and in their periodic review subsequently.\nPossible benefits of using PDPs: The following are some of the claimed benefits for the use of PDPs that came from the research:* Relevant learning and development is more likely to happen in practice when the person is goal directed.* Learning that is planned is more efficient.* Unanticipated learning opportunities are more likely to be seen when the person is prepared for them.* The choices of learning methods are more likely to be appropriate following completion of a PDP and their use can be designed and managed to provide a tailored fit with the learner's needs and interests.* Motivation and confidence in taking responsibility for one's own learning can be enhanced.\nPossible limitations of PDPs: Research indicates that there can be limitations to the use of PDPs and the key ones are summarized below:* The quality of a completed PDP can suffer without timely and relevant diagnostic information from others.* The creation of a valid and useable PDP is particularly difficult to achieve without the active support and agreement of others who are relevant to a person in their current role.* The successful implementation of even a well-crafted PDP is not guaranteed without continuing support and challenge from others.\nConclusion: Despite the potential pitfalls in using PDPs it's difficult to imagine that just random learning experiences can be the best way forward for individuals and their organizations. 
However, merely to pay lip service to the notion without rigorous and well thought-out processes can be of limited value.\n", "role": "user" }, { "content": "Purpose: - The article aims to investigate personal development plans (PDPs), which have become increasingly popular in organizations as perceived demands for staff to become more adaptable, flexible and empowered have grown in response to internal and external organizational change. The adoption of PDPs is seen to be in tune with the times.\nDesign/methodology/approach: - This article draws from research into work based learning by Strategic Developments International.\nFindings: - The study shows that, despite their popularity, PDPs are used patchily and only a few organizations use them well. One problem is that \"Personal\" can come to mean disconnected from work or from the organization's strategy. Second, we see too often that these plans stay as interesting plans that do not get actioned. A third concern is that the PDP is often very short term. However, just because many people and organizations use PDPs badly does not undermine the value of planned learning. It just means that learners and their managers need to manage the process well so that they and their organization get the maximum value from it.\nOriginality/value: - The article provides guidance on how to get the best from the PDP process.\n", "role": "assistant" } ]
The logic of the gift and the bonding value: a new perspective for business management
[ "Gift", "Relationships", "Bonding value", "Business management representation" ]
[ { "content": "Summarize the following paper into structured abstract.\n\n1. The meaning of business relationships: the so-called \"gift model\": In this section we are going to clarify the meaning of business relationships, using the so-called \"gift model\". From an epistemological point of view, such a model tries to overcome the dichotomy between methodological individualism and holism. Methodological individualism considers people as isolated individuals that are rational and selfish calculators. Holism, on the other hand, considers interpersonal relationships as a matter of fact; individuals have only to adhere to those values, rules, culture and social functions that are required by organizations or society.So we are looking for an approach that looks in substantial depth at the nature of business, taking relationships seriously into account, not only analysing their network structure, but also their content and, most of all, their meaning. We should analyse the content of business relationships going beyond the often simplistic and partial explanations that are based on self-interest and individualism. We have to expand and complement utilitarian principles, trying to add other motivational sources to the self-interest motivation, so making the analysis of relationships closer to reality. Introducing the concept of \"altruism\" may not be enough if it remains linked to an impersonal, individualistic and non-relational approach (Zamagni, 2002). 
According to some scholars, altruistic behaviour only serves if the agent's utility is positively correlated to the recipient's behaviour (Becker, 1974), or as a strategic and instrumental action, carried out to improve reputation and thus to obtain benefits in the long term (Andreoni, 1990), or using a moral imperative that binds the agents (Etzioni, 1986).We shall now introduce the so-called \"gift model\", and attempt to design a model of social and individual economic action that can be more correct from a relational perspective. The \"gift model\" does not focus on the analysis of what circulates between people, and so it does not conceive of business relationships as instruments for the effective and efficient circulation of goods; rather, it focuses on the intrinsic meaning of relationships, believing that the circulation of goods is instrumental for the development of relationships.We inevitably begin this venture by going back to the first scholar to study this issue from an anthropological and sociological perspective, Marcel Mauss (1950). Studying the so-called \"archaic societies\", Mauss remarked that the circulation of goods usually starts from a unilateral act, a \"giving\" act; this act somehow compels the other party to \"receive\" and then to \"give back\". This kind of circulation of goods is mainly motivated by the will to build and develop social relationships rather than by the pressure of purely economic or utilitarian reasons. So the act of \"giving back\" does not follow the equivalence rules, typical of market exchange. In that case we have to give back something that is economically equivalent to what we have received. 
There is a sort of uncertainty on when and how the party will give back; the person who first triggers this cycle cannot know if, when, or how much he/she will receive; he/she only knows that through his or her act he/she is offering the other party a chance to build and develop a relationship.Analysing the so-called \"archaic societies\", not only in the past but also today, and all those social groups where basic relationships are strong, many scholars after Mauss (Caille, 1998; Godbout, 1992, 2007; Latouche, 1997; Schrift, 1997) have argued that the circulation of goods is made in the form of gifts; moreover, it is motivated not only and not so much by self-interest, but also by the intrinsic pleasure of \"giving\", by social obligations, and by spontaneity and freedom.Now, if the development of relationships is the priority, the focus shifts from what circulates to the meaning of circulation. In this perspective, the main objective is not researching some gain; people don't calculate if they are earning something from the exchange. What counts is if the circulation of goods is meaningful for the development of relationships.In the \"gift model\" we don't need to reduce the basis of all people's behaviours to a self-interest motivation, as the individualistic and utilitarian paradigm does, or to a duty and obligation motivation, as the holistic paradigm does. The gift as a system of social action is intrinsically multidimensional, plural, and stands against any form of reductionism.\n2. The concept of value in relationships: towards the \"bonding value\": In this section we want to focus on the concept of value in relationships, trying to go beyond the common notions of exchange and use value.Economics has always focused on what circulates, that is on the goods that are exchanged. In this approach, actors are \"rational\", in the sense that they try to maximize their utility calculating and balancing their benefits and costs. 
The best way to represent such value is to reduce it to a quantitative equivalent, which is represented by a sum of money set by supply and demand. This is the exchange value.Use value represents the utility coming from an object or a service; it has a subjective meaning (while exchange value is objective) and an individualistic perspective, because it measures how the actor involved in an exchange values the goods he/she receives.If we adopt the above-described theoretical framework, the two traditional concepts of value are not enough. We have to look for a concept of value that is able to take the development and the reinforcement of relationships into account.Here we propose the concept of bonding value (Godbout, 1992) that is related to the fact that gift-giving fosters business relationships and enriches people that are involved in these. Unlike use value, which tends to measure only the instant use of goods, bonding value, even if it is difficult - if not impossible - to calculate, shows that goods are instrumental for relationships, and not vice versa. The same object can have a very different bonding value, depending on the sort of relationships in which it is found.Now, if it can be generally accepted that gift-giving increases bonding value, we have to understand if it has a neutral sense from a human values point of view, or if we should make a positive or negative ethical judgment of it. Actually there are social contexts where strong and intense relationships are characterized by debatable values (e.g. contexts characterized by bribery and corruption) or by undesirable values (e.g. Mafia social relationships).For instance, gift giving is a prevalent social custom in China in all areas of life, and expresses deeper socially-embraced behavioural ideals and norms of mutuality and \"right relationships\" between people. It is an expected behaviour that shows respect to another person and strengthens relationships. 
However, the lines between gifts and corruption are often blurred (Steidlmeier, 1999). Gift giving is often associated with bribery when the receiver is invited to pursue personal interests at the expense of the legitimate aims and objectives of other people (Millington et al., 2005).So, if business relationships analysis had to stop at the meaning of what circulates among people that are directly involved in the said relationships, it would seem certain that relationships development is \"good\", at least for those people. If the relationships analysis was extended to the wider social context where such people are embedded, it would probably not be enough to state that the development of relationships is \"good\".We think it necessary to seek some ethical criteria to better analyse relationships, in order to allow us to give a non-neutral ethical judgment, looking for the correlation between bonding value and human values. Referring again to the \"gift model\" (Caille, 1998), we can say that people's behaviour in business relationships can be based on four motivational sources, and so on four driving values. These can be ideally demonstrated on two axes:1. On the first one we can find the interest motivation, with two opposite poles:* interest to, that is selfish, instrumental and extrinsic; and* interest for, that is intrinsic, it can be altruistic, and sometimes it may result in disinterest or pleasure.2. On the second axis we can find:* freedom, that can lead to spontaneity; and* duty, that can come from a social obligation (as in rituals or ceremonies) or from an inner and moral obligation (as in Kantian philosophy).If gift giving is mainly based on interest to something, it can be a sign of selfish behaviour, that can bring some advantages to one or both the parties directly involved in the relationship, but can bring disadvantages to some others. 
If gift giving is based on other driving values, it can develop bonding value for a wider network of people, such as the community (especially for those behaviours mainly driven by duty and obligation) or a more or less wide social context. Probably, when gift giving is mainly motivated by spontaneity and by disinterest, the bonding value that is created can be richer and deeper, because the relationships it is based on tend to transcend those people who are directly and indirectly involved in it. This may be the case where people express and fulfil their humanity. When they give just because of an interest to something or a sense of duty, they are not following human nature. Actually we think that people are by nature donors, because they are in a state of \"permanent debt\", towards others, or society, or even God.One might ask where the force that drives us to donate comes from. Perhaps it comes from the fact that the gift system does not begin by giving but by receiving. If people are \"rich\" and \"abundant\" rather than \"needy\", then they are in a sort of permanent debt. They give because they realize that they belong to the same system of gift, to the same system of relationships, to a state of mutual positive debt. Goods circulate as gifts because of this system of relationships that is at the same time intrinsic and transcendent to them (Anspach, 2002).To end this section, we'd like to point out some elements that can be used as signs to understand if a specific system of goods circulation develops some bonding value. These indicators could provide some orientation for managers in designing their business relationships both inside (e.g. with workers or shareholders) and outside (e.g. with customers, suppliers, banks, etc) the firm. 
Some of these are:* Lack of an explicit negotiation: when the receiver gives back something, he does it as a gift; the donor tends to deny the gift, using some expressions that try to downplay the importance of this act and to avoid making the receiver feel indebted.* Excess and lack of calculation: the excess in gift giving is a means to avoid the mechanical and individualistic calculation of utility, to give space to the true meaning of the act of gift giving.* Voluntary uncertainty: when people realize that they are acting according to certain rules, they seek to circumvent and deny them (Caplow, 1984); uncertainty becomes positive, we move away from the relative certainty of a market exchange; if we made a gift compulsory or linked to some rules, we would devalue it, and the gift would no longer be able to create some bonding value.* State of mutual positive debt (Godbout, 1996): the parties that are involved in the relationship think that they give others less than they receive; both actors do not feel obliged to give back something in order to reduce the debt, but they still want to give as much as possible.* Plurality of motivational sources: people's behaviour cannot be reduced to interest to something, nor to obligation and duty; to develop \"good\" bonding value it has to be based also, and most of all, on spontaneity, interest for someone and disinterest.\n3. The \"bonding value\" in the firm: some ideas for a possible use in business activities: 3.1 A multi-dimensional approach\n4. Conclusion: In this article we have attempted to promote some consideration about bonding value in business management. We first offered a conceptual clarification and then we tried to think about the possibility of using bonding value in business management. 
The importance of bonding value comes from the importance that is given to relationships and how they are interpreted.An instrumental conceptualisation of relationships leads us to consider them as mere instruments for achieving better performances. This approach considers the firm as an ethically and socially neutral mechanism to achieve some profits while respecting any pertinent legal constraint (Friedman, 1962).If we consider the firm as a human organization, the meaning of relationships radically changes. The purpose of the business becomes the fulfilment of human needs through a culture that is truly humanizing (Mele, 2003). In this sense, considering that one of the most important human needs is the need for sociality, the relational dimension becomes one of the elements that qualify the development of a firm (Sorci, 2009). Bonding value fits into this conceptual framework as an element able to extend the analysis on the effectiveness of the firm. Bonding value and the gift logic can therefore be proposed as new parameters to weigh up whether or not the firm has achieved its institutional purposes.One aspect that would need further investigation is the ethical dimension. In particular, a key point concerns the social level at which to extend the assessment of the social relationships and the system of values that we consider. The strengthening of the relationship is usually good if we consider the parties to the relationships, but is not necessarily so if we extend our consideration to the whole community. Which values are to be considered? The answer, which is beyond the aims of this article, is not an easy one to find, and it can probably be found in an ideal continuum between a relativist and a universalistic approach.To give substance to the notion of bonding value we considered certain issues related to its practical use in the firm. 
We have tried to draw a logical scheme to understand how decision-making criteria can be greatly changed. Many questions are still open: What is the impact of firm size and sector on these issues? What considerations can be made about the financial, time and knowledge resources that are needed to take bonding value into account? This article is therefore a starting point of a possible conceptual path that should ultimately aim toward the theoretical and practical use of "bonding value" in the firm.\n
It is necessary to support the theoretical considerations with future empirical investigation showing the possibility of practical applications of the concept analysed.\nPractical implications: - The main implication for business management is the possibility to propose \"bonding value\" and \"gift logic\" as new parameters in assessing whether or not a firm has achieved its institutional purposes.\nOriginality/value: - In the past, \"bonding value\" has been studied mainly from a philosophical and sociological point of view. In this article we try to begin to address this issue specifically from a perspective of business management.\n", "role": "assistant" } ]
Developing criteria for the withdrawal of print content available online
[ "Serials management", "Electronic journals", "Collections management" ]
[ { "content": "Summarize the following paper into structured abstract.\n\nIntroduction: Access to increasing numbers of electronic journals, both through current subscriptions and the growing availability of archival backfiles, expands the definition of what it means to build a collection. It allows libraries more collection management options in the face of competing priorities, such as limited physical space, limited financial resources, and changing customer behavior. The University of Arizona's (UA) Science-Engineering Library (SEL) is taking advantage of online access as a replacement for print journal copies. SEL is removing print journal copies from the shelves as archival backfiles that meet our standards for perpetual access and commitment to digital preservation are purchased. This is a strategy designed to save space and money, as well as provide flexibility in meeting changing customer needs.\nPurpose: There is an underlying tension between the traditional print collections of journals and the electronic repositories that have been created to replicate their content. Librarians are skilled at maintaining a physical repository for print content and are always conscious of ways to maintain this material for future generations through efforts such as physical preservation or microfilming. As the Web makes electronic journals a more important format for most collections, libraries continue to be concerned with maintaining this content for the future and have actively worked to insure that electronic journal repositories are stable and sustainable. More and more publishers and vendors are making back runs of journals available online, many from the very first issue. For libraries that purchase these backfiles the question is this: If the online version duplicates the print copy, can or should both be maintained?Libraries face numerous choices in dealing with the print volumes that these electronic archives duplicate. 
Print journals can remain on the shelf, be removed to storage (on- or off-site), be given away, or be discarded. Depending upon financial resources, physical space, and customer need, libraries may select one or more of these options. Some libraries choose to pay for and keep both formats despite the overlap. One reason may be that for libraries with print volumes and less pressing space constraints, it may be simplest to leave the print collection as is. There may also be several other reasons, including distrust of the permanence or stability of electronic archiving (Duranceau, 1998; Flecker, 2001; Hughes, 2002; Reich and Rosenthal, 2003) and maintaining the traditional role of libraries as repositories (Duranceau, 1998; Scherlen, 2004). Furthermore, electronic journals do not always duplicate the print exactly. For instance, advertisements or errata may not be included, or images may not be adequately reproduced. However, for others such as the UA SEL, shrinking budgets and space constraints make print retention an untenable and non-strategic plan of action. How does the Library move towards a future collection that meets users' needs while freeing space resources and avoiding a situation of continual crisis? The UA Library makes choices in collection development that take a "transformational" vs. "muddling through" approach to this situation, as defined by Guskin and Marcy (2003). In this case, "muddling through" would be to make incremental changes in collection management decisions, just enough to avert a short-term crisis. Unfortunately, this may set up a situation that could result in a pattern of continual crisis, leaping from one bad situation to another. In a "transformational" approach, the UA Library analyzes the entire situation and makes substantial changes that transform the process. Even for the most resource-rich libraries, it is virtually impossible to maintain a comprehensive "just in case" collection. 
For libraries with fewer resources, such as the UA Library, it is unthinkable. Given these constraints, a goal of UA Library collection efforts is to build a collection appropriate to a known level of need and to make services available that provide additional materials \"just in time\" as necessary. It is not essential that every library act as a print repository, especially if this means the removal of print journals that are duplicated in online archives and are common to the holdings of other libraries. Given the specific context of SEL (as described below), the UA Library is removing print journals as electronic backfiles are available. These backfiles are evaluated for completeness, quality, and publisher commitment to digital preservation. At a minimum, the publisher must include a license that allows for perpetual access.\nContext of the UA's SEL: The need to rethink the nature of collections is particularly acute in SEL. A number of factors are converging to make the reassessment of information access a priority.There are numerous dimensions to this situation, largely interrelated:* physical space;* library-wide collection policies;* budget;* availability of electronic resources;* user needs and behaviors; and* historical significance of collections.Physical space\nJournal backfiles: In 2002-2003, the UA Library purchased the complete run of available ScienceDirect backfiles, an extensive collection of electronic journals covering all areas of science and engineering. Although the backfiles were purchased primarily to meet our customers' demand for more online content, we also saw an opportunity to relieve overcrowding in the stacks by withdrawing the print equivalents from SEL. As the Library had recently opted for electronic only subscriptions to Elsevier ScienceDirect journals, our print holdings of these journals became closed runs in 2001 or earlier. 
Furthermore, the Elsevier ScienceDirect license incorporates terms for perpetual access and specifies their commitment to migrate content to current technologies as appropriate (Verheggen, 2004).What was first thought to be a relatively straightforward project that could be finished quickly turned out to be very time-intensive. Although a significant amount of print content was ultimately withdrawn, there are unavoidable trade-offs that will be discussed below. In the end we hope that our customers will benefit both from a collection that is appropriate to local need and is increasingly available online, yet not be inconvenienced by the withdrawal of these print journals. We also believe our experience will serve us well as we evaluate backfiles from other publishers for the purposes of withdrawing additional print journals.\nMethodology: We used information from the Elsevier website to create a database of the titles and years available in the backfile collections we had purchased. Each title was checked in SABIO, the UA Library Catalog, to determine which titles SEL owned in print and the extent of those holdings. Student assistants then counted the number of volumes of the print equivalents in SEL and the shelf space occupied by these journals. For certain subject collections we also checked the catalog of the University's Arizona Health Sciences Library (AHSL), which is a 15-minute walk from SEL.From this subset of local holdings each online equivalent in ScienceDirect was checked for a complete sequence of issues and volumes, although only for the range of years SEL owned in print. It was noted that some issues were marked by the publisher as \"Not yet available.\" Other missing content was evident as unexplained gaps between sequences of volumes or issues, or occasionally pages within an issue. 
We recorded this information in separate database fields, and later shared the information about these problems with the publisher.Based on our knowledge of certain journals that rely heavily on detailed graphical elements to convey information, a selection of online articles from these journals was checked to assess the quality of images and illustrations. We did not have the resources to evaluate every title, but instead used our best judgment when deciding which titles to examine in more detail. A number of poorly scanned images that did not compare favorably with the print equivalent were discovered. Notes about these problems and specific examples were then shared with our Elsevier representative.As the evaluation proceeded, it became clear that we would not be able to withdraw as many print titles as anticipated. The question then became one of determining what could reasonably be withdrawn. If we were looking for equivalency with the print journals, which were themselves rarely without gaps, what level of completeness was appropriate? Would Interlibrary Loan, a free service at the UA, meet the potential need? 
While there was no way of knowing how frequently the missing issues or articles with poorly scanned images would be needed by our customers, we were concerned enough to be fairly conservative when setting a limit.The tiered system we decided upon was as follows:* Tier 1 - few or no problems, ready for removal: three or fewer issues missing online and accurate links to the online content in both the library's catalog and the library's A-Z electronic journal list; or AHSL print copy completely duplicates SEL print holdings and accurate links to the online content in both the library's catalog and the library's A-Z electronic journal list.* Tier 2 - some problems, delay removal, share known problems with publisher: 4-20 issues missing online.* Tier 3 - serious problems that need special attention, delay removal, share known problems with publisher: 21 or more issues missing online; or scanning problems or local cataloging issues that need to be resolved before print can be removed.We then brought our proposal of a tiered withdrawal system to the rest of our SEL colleagues. We explained the process above and drafted a letter that could be shared with science-engineering departments who might have an interest in the withdrawal of these collections. After waiting a month for feedback, we then proceeded with the withdrawal of the \"tier 1\" titles, which equated to over 15,000 bound volumes, or approximately 1,800 linear feet of material. Local holdings for titles in the \"tier 2\" and \"tier 3\" categories totaled 12,000 bound volumes, or 1,500 linear feet, and plans were made to reevaluate them in six months to see if any of the problems noted above had been addressed.There was very little concern raised by faculty about the project to withdraw these journals. One reason may be that our customers have indicated that electronic journals are the preferred mode of access, and even a short trip to the Library is becoming more of a barrier than it has been in the past. 
This is also not the first major journal withdrawal project for the UA Library. Print copies of back issues of Science, Proceedings of the National Academy of Sciences of the United States of America, and other major journals that are available on JSTOR had previously been withdrawn. This has perhaps accustomed users to the library's determination to rely exclusively on electronic access for journal collections whenever possible. Descriptive information and links to the ScienceDirect backfiles were then loaded into our catalog records and our A-Z list of electronic journals. To ensure adequate access points for our customers, we evaluated each catalog link before considering a title for withdrawal. In some cases title changes were handled differently in ScienceDirect than in our catalog, and these problems had to be resolved on an individual basis before a copy could be withdrawn. As a side benefit of this project, the evaluation of each title meant that many local cataloging problems were addressed soon after the purchase of the backfile. In the past, these issues have sometimes only surfaced after being brought to our attention by users. We realize that by withdrawing our print copies we are in effect shifting the burden of long-term preservation onto others in the Library community. We are also putting our trust in the publisher's willingness and ability to create and maintain a stable digital archive that is duplicated in more than one location. While we are comfortable with Elsevier's commitment in this regard, we also recognize that a test of the perpetual access clause in the event of cancellation could reveal unanticipated complications. We also do not know what the ultimate impact of the withdrawal will be for our customers, and there is some anecdotal evidence that seems to indicate that it could become an issue. 
At the reference desk, for example, one of the authors dealt with a customer who was in SEL specifically to track down a print copy of an Elsevier journal because an image in the online version was scanned with low resolution. In this case, fortunately, the title in question had not been withdrawn for this very reason. However as it was not possible to check each title as thoroughly as we would have liked, this project has inevitably resulted in the withdrawal of some content that is not adequately reproduced online.\nFuture plans: Archival collections of journals continue to be released by publishers and may provide us with additional opportunities to withdraw closed runs of journals. Wiley InterScience, for example, has numerous backfile collections and meets our standards for perpetual access and commitment to digital archiving. Initial examinations of these backfiles indicate they have considerably fewer problems with missing online issues and the quality of the scanned images is quite good. A gradual shift to electronic books is also being explored, although this seems less likely to have an immediate impact on space savings due to financial constraints.\nConclusion: This is an important shift in the understanding of collection management. It still provides appropriate content but does so in a medium that better meets users' expectations. We are meeting users' needs but allocating our resources more effectively to allow us to move in other strategic directions.While it met our immediate need to relieve crowding in the stacks, the print withdrawal project is just one stage of a longer-range plan to redesign how space is used in SEL. The recent withdrawal of a significant number of print indexes available online, for example, allowed us to take out over 400 shelves of material in the reference area and create an information commons in their place. 
Our gate counts have increased significantly with the addition of these new computers and preliminary evidence suggests a reversal in the decline of questions at the reference desk. By freeing up additional space, SEL may be able to further increase its value to the campus by serving as a collaborative meeting space for our faculty and students. Our central location on campus provides a convenient setting and we believe this would address the needs of several departments (a more thorough needs assessment is planned in the near future). With fewer print collections, not only do we meet our own needs for managing our stacks, we also are looking to the future by repurposing space for the best possible use.\n", "role": "user" }, { "content": "Purpose: - Limited physical and financial resources and changing customer behaviors compelled the University of Arizona Science-Engineering Library to pursue more flexible collection management options, such as removing print copies of journals as the library purchased the electronic backfiles. The purpose of this paper is to describe a process used at the library to compare electronic journals to their print counterparts.\nDesign/methodology/approach: - The library's approach was to study the electronic content provided through Elsevier's ScienceDirect for completeness and quality of text and images. This was to ensure that the removal of print would minimally impact library customers while reclaiming building space that could be better utilized to meet changing customer needs.\nFindings: - The process uncovered the reality that the electronic backfiles were not always adequate substitutes for print copies. In response, it was necessary to open a dialogue with the publisher to share the library's findings, which resulted in improved electronic backfiles.\nOriginality/value: - This paper weighs the advantages and disadvantages of taking a transformational approach to collection management.\n", "role": "assistant" } ]
The role of service providers in 3D printing adoption
[ "Denmark", "Germany", "Service providers", "Additive manufacturing", "3D printing adoption challenges", "Industrial firms" ]
[ { "content": "Summarize the following paper into structured abstract.\n\n1. Introduction: Many firms across a diverse range of industries are exploring 3D Printing (3DP) for multiple applications including prototyping, tooling and component manufacturing. Conservative estimates consider 3DP to be still at an early diffusion stage, with room for further and rapid development (Anusci, 2015). However, 3DP has multiple potential advantages, i.e. decrease in lead time (Mellor et al., 2014; Chiu and Lin, 2016), reduction in transportation, logistics and inventory costs (Khajavi et al., 2014; Chen et al., 2015), weight reduction, energy consumption reduction and improvement in environmental sustainability (Petrovic et al., 2011; Chen et al., 2015; Holmstrom et al., 2017), improved flexibility and customization (Khorram Niaki and Nonino, 2017; Holmstrom et al., 2017; Deradjat and Minshall, 2017). Indeed, increasingly sophisticated 3DP technologies are expected to transform manufacturing and provide impetus to 3DP services, as well as applications of new business models (Gallouj et al., 2015; Hannibal and Knight, 2018; Jiang et al. 2017).\n2. Literature review: 2.1 Challenges in adopting 3DP\n3. Methodology and case descriptions: The exploratory interview and case study analysis approach (Remenyi et al., 1998; Dubois and Gadde, 2002) employed here identifies to what extent the participating companies faced challenges in setting up 3DP facilities, both from the perspective of firms who attempted to do so (customers) and other firms providing support services beneficial to this endeavour (service providers). Two separate interview questionnaires - one for 3DP users and another for 3DP service providers - were developed based on the research gaps identified in the literature review. 
These questionnaires covered issues such as what specific benefits motivated the company to adopt 3DP technologies, the associated risks and barriers in their industry, for what applications 3DP is currently being used (e.g. prototyping, tooling, spare parts manufacture, small batch run manufacture) and how this is likely to change in the future, as well as the types of 3DP services currently being used. The questions also served as a guideline to conduct semi-structured interviews with both 3DP service providers and firms that have used such services in the past. The interview partners consisted of three service providers (one in Denmark and two in Germany) and seven user firms (customers) with three in Denmark and four in Germany (see Table I). The interviews were carried out between January and May 2017. Interview sessions were recorded, analysed and subsequently shared with the interview participants.\n4. Analysis: The customers who participated in this research reported several practical challenges throughout the adoption process, i.e. during the decision phase of 3DP adoption, during adoption itself and in post-adoption applications. Table III provides an overview of the challenges faced by the user organisations, as well as the challenges that 3DP service providers perceived their customers were facing.\n5. Discussion: Our research has revealed key challenges experienced by firms in adopting 3DP. The primary objective was to identify the possible challenges these firms faced and, more specifically, how 3DP service providers can potentially assist in overcoming them. These were a lack of knowledge about the potential of the technologies, uncertainties in the future development of the technologies, materials, training shortages and equipment and limited support from service providers. Such challenges can be mitigated by service providers developing services suited to the customer needs. 
The companies themselves should also develop enablers such as top management support and leadership, training and knowledge sharing within the organisation, a culture conducive to experimenting and working on new ideas, etc. However, we did not specifically focus on the above enablers in this research.\n6. Conclusions, implications and scope for future research: This paper uses an exploratory study of nine user organisations (customers) adopting 3DP and three 3DP service providers working towards improving the adoption of 3DP technologies amongst industrial companies. The findings reinforce the view from existing literature that the technology and business models are still immature for large-scale adoption. Companies planning to adopt 3DP face challenges on many fronts including lack of ease of use, technological turbulence, lack of resources and training. The study highlights the important role service providers can play in overcoming the above challenges. As Rogers et al. (2016) indicated, companies need to better understand how and to what extent to integrate service providers into their supply chains. This study furthermore identifies specific challenges faced by user organisations during the pre-adoption, adoption and post-adoption phases of 3DP. It also points out that service providers do not have a full understanding of all the challenges faced by their customers. Service providers can, however, play a critical role by helping customers in various ways: these include choosing the right technology, equipment and processes, making customers aware of future developments, allowing them to experiment with the technology, and providing training (Autry et al., 2010). 
In a study on the impact of 3DP on the hearing aid industry, Sandstrom (2016) noted that the initial technological uncertainty made it difficult to make the right decision concerning what technology to use, which is usually the case when a materially different technology is introduced (Utterback, 1994). Thus, firms should monitor the technology closely before the emergence of a dominant design and build absorptive capacity (Cohen and Levinthal, 1990). It is not surprising that firms have raised similar concerns here.\n", "role": "user" }, { "content": "Purpose: The purpose of this paper is to identify challenges faced by industrial firms at different phases of adoption of 3D printing (3DP), and outline how 3DP service providers can help address these challenges.\nDesign/methodology/approach: Separate interview questionnaires for 3DP users and 3DP service providers were used to conduct semi-structured interviews.\nFindings: The key 3DP adoption challenges are as follows: creating a business case; difficulty in using different materials; optimising the process for specific parts; lack of "plug and play" solutions offered by equipment manufacturers; limited availability of training and educational support; poor end product quality; machine breakdowns; and high cost of maintenance and spare components. Using the theoretical lens of the technology acceptance model, results show that lack of ease of use and technological turbulence impact companies' decisions to adopt 3DP. 
3DP service providers can indeed attempt to alleviate the above challenges faced by customers through providing multiple 3DP services across different stages of adoption.\nResearch limitations/implications: Future research should examine the role of 3DP equipment manufacturers and design and modeling software solutions providers in improving adoption and how 3DP equipment manufacturers could develop into more integrated service providers as the technology advances.\nPractical implications: Service providers can help customers transition to 3DP and should develop a portfolio of services that fits different phases of adoption.\nOriginality/value: The paper outlines how 3DP service providers can help address customer challenges in adoption of 3DP across different stages of adoption.\n", "role": "assistant" } ]
Factors influencing logistics cost and service quality: a survey within the Indian steel sector
[ "Logistics cost", "Logistics process and cost elements", "Service quality" ]
[ { "content": "Summarize the following paper into structured abstract.\n\n1. Introduction: Globally, the elements of logistics are very critical in deciding the competitiveness of steel sector; typically for steel manufacturing, inbound logistics is more critical as compared to outbound logistics both in terms of volumes and costs. The profitability of Indian steel sector depends entirely on how inbound and outbound logistics are managed.\n2. Key concepts: 2.1. Logistics management in Indian steel industry\n3. Research objectives: The primary objective of this paper is to investigate the influence of the dependent variables, service quality via logistics cost as an independent variable in steel sector in India. Therefore the following hypothesis is framed:\n4. Materials and methods: This study was conducted in two phases. The first phase focused on qualitative research, and the second phase focused on a quantitative research approach. The focus group assisted the researchers in developing the questionnaire and provided the desired information on various elements of logistics cost and quality of service from the viewpoint of steel manufacturing industries in India. The questionnaire was used during the quantitative phase of research. The reason for quantitative data is that it is easy to interpret the results in simple conclusion (Zikmund et al., 2013). Taking into consideration, the nature of the research, the study was descriptive. A structured questionnaire survey was used to collect data for this research, and this process was administered by conducting personal interviews. The questionnaire included self-developed items, as well as items from questionnaires used in previous researches.\n5. Multiple regression analysis results for logistics cost and service quality: Multiple regression analysis was performed to assess the relationship between the independent variable, i.e. elements of logistics cost and the dependent variable, service quality. 
The results are reflected in Table V. It clearly indicates that the independent variable positively influenced the dependent variable. The relationship between logistics cost elements and service quality is significant at p=0.000. This relationship implies that if the elements of logistics cost are managed to successfully maintain relationships with customers, by way of providing on-time information, data and documents, service quality at the customers' end will increase. Table V indicates that the independent variable in the multiple regression analysis explained 81 per cent of the variance (R²) in the dependent variable.\n6. Discussions and implications: An attempt is made to understand the current world oil prices (Figure 5), market interest rates (Figure 6) and the currency exchange rate (Figure 7), keeping USD as the base.\n7. Relevance of findings for training and development: The empirical results imply that logistics management can improve service quality and serve the relationship between the steel industry and customer needs more efficiently and accurately. Furthermore, it is clear that the elements of the logistics process should be strengthened step-wise, and there should be a close monitoring mechanism in order to further improve service quality.\n8. Concluding remarks: This study addressed the important area of logistics management in the Indian steel sector. Logistics management for the steel sector in India gains high importance as key steel manufacturing organizations are situated far away from ports and the majority of the raw material is imported. Efficient logistics management can not only help in reducing inventories, but also in gaining responsiveness. 
The findings of this research are very useful for both academia and professionals.\n", "role": "user" }, { "content": "Purpose: The purpose of this paper is to understand the role of economic and social factors influencing the logistics cost for the Indian steel sector and its relationship with service quality.\nDesign/methodology/approach: A structured questionnaire survey is used for the data collection in this research. The questionnaire included different elements of logistics cost on inbound and outbound logistics, and service quality.\nFindings: This research identified various significant elements of logistics cost for the Indian steel sector and observed its relationship with transportation cost, warehousing cost and administrative cost. The findings of this study showed a positive relationship between logistics cost and service quality.\nResearch limitations/implications: The data for this study are cross-sectional; it is recommended that future studies gather data longitudinally. This will help in gaining a deeper understanding of the cause and effect relationship among the variables. The findings of this study are very useful in identifying suitable training directions.\nPractical implications: Logistics is a very critical factor in deciding the competitiveness of the steel sector globally. Particularly for steel manufacturing, inbound logistics is more critical than outbound logistics, both in terms of volumes and costs.\nSocial implications: For the Indian steel manufacturing companies, the logistics cost is very high compared to developed countries. Total logistics cost on inbound movements is around 16-18 per cent of material turnover, whereas the world average is around 7-8 per cent.\nOriginality/value: To the best knowledge of the authors, this study is the first attempt to study the relationship of logistics cost elements with service quality in the context of the Indian steel manufacturing sector. 
In this research, the logistics cost in the steel industry is considered one of the top performance measurements for the enterprise.\n", "role": "assistant" } ]
Capital statement analysis as a tool to detect tax evasion
[ "Tax fraud", "Case study", "Tax evasion", "Capital statement analysis", "Tax investigation" ]
[ { "content": "Summarize the following paper into structured abstract.\n\nIntroduction: In Malaysia, under the self-assessment system (SAS), taxpayers are encouraged to voluntarily comply with the tax law and be responsible for their own tax returns. Thus, the taxpayer has more freedom in reporting his or her income because the Inland Revenue Board of Malaysia (IRBM) accepts in good faith any report submitted to it (Juahir et al., 2010), such as via the electronic tax form submission system, known as e-filing (Saibon et al., 2016). The essence of the implementation of the SAS is that taxpayers need to know their true responsibility and have a good understanding of the tax law. Failure to comply with any of the rules and guidelines under the Income Tax Act 1967 will be a burden to the taxpayer. It is anticipated that SAS will create a situation whereby taxpayers submit their tax information honestly and voluntarily (Marshall et al., 1997).\nLiterature review: Capital statement analysis as a method of proving under-reported income\nResearch methodology: The research design of this study is focused on real tax investigation cases that are used as case studies for detailed analysis. The case study approach was selected because the information that can be gathered to achieve the purpose of this study is limited and highly sensitive. Access is only permitted with a special request to the top management of the investigation body under research. The information is also confidential and not publicly available.\nFindings and discussion: The findings, after examining all the related documents and files of all five cases, are discussed as follows:\nConclusions and implications: The objectives of this research are mainly to examine the effectiveness of capital statements in detecting tax evasion and to determine the factors that contribute to that effectiveness. 
Based on the case studies, it can be concluded that capital statement analysis is an effective tool for detecting under-reported income and tax evasion. It helps the revenue authority to recover millions that would potentially be lost to tax-evasion activities. However, there is a limitation in the data-gathering process that could delay the settlement of the case. Cooperation from the taxpayer itself is the most important factor, because if the taxpayer seems to be non-cooperative (which is always the case), the investigation officer (IO) needs to find other people, i.e. third parties, as sources of information. A weakness in the data-gathering process would cause failure in the course of the investigation. The hidden income would remain undetected and create loopholes in tax enforcement.\nLimitations and suggestions for future research: This study has several limitations that can be addressed in future research. First, the research focuses on only five cases of actual tax investigation. In the future, more cases should be included to obtain more accurate and robust results. Second, the study only discusses certain issues that are covered in the case studies. For future research, the data could be analysed via an advanced statistical analysis package to obtain more robust findings. Finally, the cases are selected from only one particular area in the main capital city of Malaysia, namely, the Klang Valley. Thus, the findings may not generalize to other taxpayer investigation locations in less-developed and rural areas, in which tax-evasion activities possibly differ. 
Future research can be conducted to compare whether there are any significant differences in tax evasion committed by taxpayers with different demographic profiles.\n", "role": "user" }, { "content": "Purpose: The purpose of this study is to determine the suitability of capital statement analysis in assisting tax investigation to combat tax evasion, measured by the time taken to prove the income under-declared by the tax evader. A weakness in the investigation process that may contribute to delays in completing the tax investigation was also examined.\nDesign/methodology/approach: Five investigation cases were randomly selected from the tax investigation organization for detailed and in-depth analysis of the whole process of reconstructing the capital statement. The document analysis technique was used to analyse the data.\nFindings: This study found that capital statement analysis can be an effective tool for detecting under-reported income and tax evasion. However, cooperation from the taxpayer is the most important factor, because if taxpayers do not cooperate, the investigation officer needs to find other informants, such as third parties, as sources of information, which is usually time-consuming.\nResearch limitations/implications: This paper selected only a small number of tax fraud cases for examination. Many other cases were not accessible due to confidentiality or because they were considered high-profile cases.\nPractical implications: The outcome of this paper can be used and applied by the revenue authority in implementing a more practical and effective capital statement analysis technique to deter tax evasion. Investigation activities can be improved so that more cases can be covered in a shorter time period.\nOriginality/value: The paper is novel and original, as it focuses on the investigation of tax fraud cases, which are difficult to access and rare in the tax literature, particularly in emerging markets. 
The findings of this study are inferred from direct examination of actual case documents that are private and confidential.\n", "role": "assistant" } ]
A collaboration of school administrators and a university faculty to advance school administrator practices using appreciative inquiry
[ "Appreciative inquiry", "Narrative", "Reflection", "Storytelling", "Administrative practice", "Craft knowledge", "School administrator", "School leadership" ]
[ { "content": "Summarize the following paper into structured abstract.\n\n__NO_TITLE__: What if school administrators used observation of successful practices/events in their schools to advance educational practices and human performance? This overarching question was the guiding focus of an appreciative inquiry (AI) study with 11 school administrators in a large, highly diverse suburban school district. This research was conducted in a setting designed to foster university-school district collaboration to identify and support school administrators in improving and advancing school administrative practice. Examining successful practices/events is an emerging paradigm in organizational studies that seeks to understand and describe the practices and elements that interact when organizations and their members operate at high levels of reflective performance (Boyd and Bright, 2007).\nTheoretical framework: This study is grounded in AI. AI is a theoretical research perspective and change methodology. In this study, it was used in both forms. As a theoretical research perspective, AI focusses inquiry on the observation and sharing of successful practices/events in a school setting. The sharing of the successful practices/events in a whole group setting facilitated the creation of a climate where inquiry and change occurred simultaneously (Schein, 1987). Since AI has its starting place in the discovery of \"what works,\" it is asset-based. As an asset-based research method, the AI-driven inquiry created an organic context in which inquiry into successful practices/events drives change through the collective discovery of what has worked, continues to work, and has the potential to evolve into a more effective model/strategy. Often, the organic context of an organization is not recognized; it is viewed as sterile and fixed. AI presents a different view. It views organizations as comprised of people who constantly create and recreate the organization. 
Like any other organic system, when it fails to adapt to its context, it becomes extinct. Through the organization's members' constant creation and recreation of the organization, AI, as a change methodology, facilitates collaborative and cooperative efforts among organizational members, creating a healthier, more sustainable organization. In doing so, AI proposes that the individual or organization is doing something right, that it is discoverable, and that those involved are the experts who contribute to making themselves, or the organization, better (Cunningham et al., 2005).\nMethods: A qualitative case study with an AI theoretical research perspective was used to collect data in this study. The unit of analysis was 11 full-time school administrators (seven men and four women) in a large suburban school district. The study's participants were administrators (three elementary and two high school principals; three assistant high school principals; and three district-wide administrators). The study occurred over a ten-week period and took place at a district high school. In addition to each AI-driven session over the ten-week period, administrators were asked to provide narratives of successful school-related practices/events they observed or participated in during the previous week. Their observation of/participation in these real-time events was guided by the following question: How can I make more of this (successful practices/events) happen?\nData source: Data were collected from the written reflective narratives of the administrators, whole group discussions, newsprint-generated documents summarizing whole group discussions, and observer field notes. The field notes contained observations of behavior and verbatim salient comments from the administrators. A content analysis was performed using an open and axial coding process embedded in TAPoR, an online text analysis program. TAPoR is the acronym for Text Analysis Portal for Research. 
The application of this text analysis tool facilitates the open and axial coding process. This process enables the discovery of linkages between words and concepts, providing a quantitative assessment of the number of such linkages. This deeper analysis provided the researchers with a high level of confidence in the credibility of the findings. Data were analyzed using three different TAPoR text analysis tools (TaPoR Team, 2012): Voyant links, Voyant Cirrus, and TextArc.\nFindings: AI, like positive psychology, seeks to recognize and nurture factors advancing the health and sustainability of people, their communities, and organizations (Cooperrider, 1999; Fredrickson, 2001). As such, it connects to the deep human need to acquire and offer understandings of present and past events (Keil, 2006). In AI, the acquisition and understanding of past and present events applies a lens that seeks to discover the history of successful actions and events in people, communities, and organizations. Once this history is discovered, people begin to make sense of it, seeking to understand what happened, how it happened, and why it happened.\nDiscussion: This study has implications for school administration praxis. The focus of AI inquiry on successful practices/events demonstrates how school administrators can apply the principles embedded in storytelling with the varied stakeholders in their organizations. Imagine the organization as a large circle, and each group of stakeholders as a subset of stakeholders within that circle. The typical school principal will work with subsets of teachers, students, parents, staff, and affiliated organizations (see Figure 3). Moreover, there is a direct connection to other subsets that rotate around the large circle, much like the moon does around Earth. 
Among these subsets are police, political agencies, social agencies, and professional organizations.\n", "role": "user" }, { "content": "Purpose: An appreciative inquiry (AI) collaborative study with 11 school administrators in a highly diverse suburban school district sought to understand if observing and sharing successful school practices/events in a whole group setting led to change in their perceptions, attitudes, and administrative practice. The paper aims to discuss these issues.\nDesign/methodology/approach: The study took place over a ten-week period with a group of self-selected (voluntary participation) school administrative participants.\nFindings: There were two findings: the AI focus of inquiry on successful practices/events shapes school administrator perceptions, attitudes, and application of craft knowledge to practice; and the school administrators' sharing of successful practices/events in a whole group setting generated new forms of practice during the ten-week study.\nOriginality/value: The findings suggest that reflection and integration of successful practices/events are a source of craft knowledge advancing changes in school administrative practice.\n", "role": "assistant" } ]