Dataset schema: title (string, 6-301 characters), keywords (list, 0-23 items), messages (list, 2 items per record)
Do credit card redemption reward programs work in China? An empirical study
[ "Consumer behaviour", "Customer loyalty", "Credit cards" ]
[ { "content": "Summarize the following paper into structured abstract.\n\n__NO_TITLE__: An executive summary for managers and executive readers can be found at the end of this article.\n1. Customer loyalty programs: As one of the most successful marketing methods during the 1990s, customer loyalty programs have received a vast amount of coverage, from within academia and trade publications. Loyal customers are considered an asset to the business as relative costs of customer retention are substantially lower than those of acquisition (Fornell and Wernerfelt, 1987). Consumer loyalty is considered an important key to organizational success and profit (Oliver, 1997), and practitioners increasingly refer to the \"lifetime value\" of customers as a form of equity.The first loyalty programs appear to have been the \"trading stamp programs\", popular in parts of the USA during the late 1950s and 1960s. Typically customers received one stamp for every 10 cents of purchase, and could redeem their stamps for merchandise at specially established redemption centers. Trading stamps became so popular in some areas that retailers competed by offering three- and four-times the normal award of stamps (Liu and Brock, 2009). It was not unusual to see automobile glove compartments and kitchen drawers stuffed with trading stamps. In the early 1980s, airlines added frequent flier programs, with points accumulated on the basis of miles flown and redeemable for free flights, (Wansink, 2003). Frequent flier programs have been in existence for over 25 years and have dramatically changed the pricing and marketing programs of what is essentially a commodity business (Varien, 1999). During this same timeframe, the popularity of loyalty programs grew in other industries as well, such as hotels, car rentals, department stores, and credit cards. In brief, customer loyalty programs focus on the firm's existing customer base with the intention of building lasting relationships as well as to strengthening commitment and creating \"velvet handcuffs\" that bond the customer to the brand (Dowling and Uncles, 1997). Some key loyalty-marketing trends have been identified and explored in detail (Capizzi and Ferguson, 2005).\n2. Literature review: 2.1. Research background\n3. Research method and questions: Between May and August, 2007; a two-phase study was conducted. The first phase consisted of exploratory focus group discussions, while the second involved over 400 personal interviews with credit card users in China. All focus group discussions and survey interviews were conducted by doctoral students in management who received formal training in research methods and interviewing skills at either Peking University (in Beijing) or Fudan University (in Shanghai) in China. In an effort to avoid possible confusion between bank cards, debit cards, and credit cards, interviewers were trained about the differences between the three, and in turn, ensured that respondents - in both phases of the study - clearly understood what a credit card is.For the first phase of the study, three focus discussions were held, exploring customers' awareness levels of credit card loyalty programs and factors affecting their usage. Participants were credit card holders in Beijing, Shanghai, and Guangzhou, and most were either graduate students or young salaried workers. 
Questions for the groups were related to attributes of credit card point-based reward programs and participant behaviors toward and attitudes about them, such as the length of credit card use, average usage rates, preferences for making purchases using credit cards versus cash and the reasons behind such preferences, awareness of additional features of and reward programs available with their cards, redemption of reward points, reasons behind not redeeming reward points, and general attitudes toward credit card incentive programs[5]. Participants knowledgeable about their cards' reward-point programs were more open to discussing their own experiences of point redemption. A typical example was a corporate officer who was enthusiastic about the different methods she used in redeeming points[6]: Ms Li (company officer): Redeeming tangible gifts is the most fundamental level (laugh). There are lots of ways to utilize reward points. For example, we could redeem some in airport to use the VIP room to enjoy relaxed waiting time. (You can use points) to park your car in the parking area of department stores when you go shopping; by the way, using a specific card in appointed stores could obtain double points! I redeem my points in many ways, and it's really convenient, why not? Two important factors to consider are how rapidly and frequently customers use their credit cards to obtain points, and the value of the redeemed gift: Mr Yu (business consultant): I, and my colleagues, have business trips every two weeks on average, we usually buy a plane ticket; also we need to treat clients - or ourselves when we work overtime- even more frequently. We are used to paying with a credit card in advance, and actually paying the invoice at the end of the month. But I could submit the bills [and be reimbursed] immediately at my company, so it's great to earn some interest gains, why not? Also, it's inconvenient to bring much cash when traveling, so a credit card is good for us. We mostly will choose famous restaurants, and most of them accept credit cards. I can earn interest, reward points, and convenience at same time. Ms Wang (graduate student): I can't use credit card in many places. I always pay cash in restaurants and supermarkets where I usually visit. I don't have much time and money to go shopping in those luxury places. I don't use my card often though I have one. Mr Liu (public servant): Give me a break, why do I need to spend much money to collect points to redeem a cup [an inexpensive gift]? A cup only costs me 5 dollars if I go to buy one directly. On the contrary, if I want to obtain some good things like a mobile phone, a digital camera, or a vacuum cleaner, my consumption is not nearly enough (to redeem reward points)! So would you please tell me what on earth the meaning of redemption is? The attitude of some participants toward point-reward program incentives seemed very positive: Ms Zhang (graduate student): Oh of course it's great to get some gifts by redeeming points. Those things, like slippers, cup, towel, puzzle etc, are good and useful. It's not hard to collect such qualified points, so I think I am willing to collect and redeem the bonus points to get the gifts...now I am considering redeeming my points for a Hello Kitty toaster, it's really cute! (laugh) [Many other participants said \"yes\", \"agree\", and \"sure\" noisily] Mr Zhu (college lecturer): There are many choices for redeeming your points. Most of them are good.
Although I may not have enough points lately, I will keep collecting points when I use my credit card. It's free, isn't it? Participants with little knowledge about reward point programs were unlikely to redeem points: Mr Zhang (public servant): I knew they [banks] offer it [reward point programs], but I don't know how many points should be collected to redeem gifts...... is it difficult?...... What do they offer exactly? I have no idea...How do I redeem? What's the process? Ms Xu (business owner): To redeem points for gifts? Come on, it's time-consuming to get it [a gift] in the bank, it's not worthwhile to do it because time is money, I'm not willing to visit the bank for only a tiny gift though I could, it's too bothersome. Discussion took place regarding the duration of card usage, quantity of cards in possession and their usage, and their point expiration periods. These focus group interviews provided valuable input for identifying important variables to incorporate into the subsequent interviews, and for other aspects of preparing the survey instrument for the next stage of the research[7]. During the research's second phase, an extensive telephone survey was conducted, with survey responses collected through a random sampling of telephone subscribers in the 2006 China Telecom directories, and from convenience samples of business people (in order to increase both their representation and the response rate) drawn from the 2006 China Telecom yellow books, in five major cities (Beijing, Shanghai, Guangzhou, Chongqing, Shenzhen). These five cities are an excellent representation of the urban population of China, and both telephone subscribers and business people in developing countries are generally better educated and earn higher incomes than the population at large. This method provided a population more likely to include credit card users than the general Chinese population. Pre-screening ensured that respondents were over the age of 20 and held at least one credit card. All credit cards in China at the time featured reward programs. The research sought to answer the following questions: 1. Do customers who have used their cards longer have a greater awareness of point-reward programs? 2. Do customers who are more aware of their point-reward program redeem their points at higher rates? 3. Do customers who hold more positive attitudes toward their point-reward program redeem their points at higher rates? 4. Do customers who use their credit cards more redeem their points at higher rates?\n4. Research findings: 4.1. The Chinese credit card user\n5. Conclusion and suggestions: 5.1. Research implications\n", "role": "user" }, { "content": "Purpose: - The purpose of this paper is to investigate how credit card redemption reward programs work in China. The study also seeks to assess Chinese credit card users' level of awareness of reward-point programs. Additionally, the study identifies the factors affecting the redemption of reward points, a customer loyalty program used by many credit card issuers worldwide.\nDesign/methodology/approach: - A telephone survey, followed by regression analysis, was conducted in five major cities in China with more than 400 respondents.\nFindings: - The study found low customer awareness of point-based reward programs and the redemption procedure and, not surprisingly, low redemption rates.
The important conditions positively affecting redemption of reward points were found to include awareness of the reward point program, a positive attitude about the incentives, and average frequency of credit card usage. Also, redemption rates were higher for cards that had been issued more recently. Moreover, redemption behaviors and credit card usage habits may change in times of financial crisis.\nPractical implications: - Awareness of reward-point programs, the redemption procedure, and the benefits of redeeming reward points should be increased through better and more effective communication strategies from credit card marketers to current and potential customers.\nOriginality/value: - The lower-middle class market segment in China is studied to provide background information regarding behavior towards credit card reward programs in the banking industry. Additional information is provided for marketing managers responsible for promoting credit card usage and rewards.\n", "role": "assistant" } ]
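An illustrative sketch of the kind of analysis the abstract above describes: a logistic regression of point redemption on awareness, attitude toward incentives, usage frequency, and length of card holding. All variable names, the simulated data, and the coefficients below are assumptions for illustration, not values or code from the paper.

```python
# Hypothetical sketch of the reported analysis: regressing point redemption on
# awareness, attitude, and usage frequency. Data and names are illustrative.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 400  # roughly the reported sample size
df = pd.DataFrame({
    "awareness": rng.integers(1, 6, n),        # 1-5 awareness of the reward program (assumed scale)
    "attitude": rng.integers(1, 6, n),         # 1-5 attitude toward the incentives (assumed scale)
    "usage_per_month": rng.poisson(4, n),      # average credit card uses per month (assumed)
    "years_holding_card": rng.integers(0, 10, n),
})
# Simulate redemption so the example runs end to end.
logit = -4 + 0.5 * df["awareness"] + 0.4 * df["attitude"] + 0.2 * df["usage_per_month"]
df["redeemed"] = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

model = smf.logit(
    "redeemed ~ awareness + attitude + usage_per_month + years_holding_card",
    data=df,
).fit(disp=False)
print(model.summary())
```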
When and why leaders trust followers: LMX as a mediator and empowerment as a moderator of the trustworthiness-trust relationship
[ "Empowerment", "Trust", "Trustworthiness", "Leader–member exchange (LMX)" ]
[ { "content": "Summarize the following paper into structured abstract.\n\n__NO_TITLE__: The people, when rightly and fully trusted, will return the trust.(Abraham Lincoln)\nTheory and hypotheses development: Trustworthiness and trust\nMethods: Study design and procedure\nResults: Descriptive statistics and CFA\nDiscussion: Perception of trustworthiness is regarded as a key antecedent of trust, such that the perceiver trusts a target perceived as trustworthy (Colquitt et al., 2007). Our research extends this stream of research by showing that follower perceptions of leader trustworthiness are not only related to followers' trust in the leader, but also elicit leaders' trust in the follower. Returning to our opening quote, perceiving leaders as trustworthy and forging a relationship based on such perceptions indeed results in leaders trusting followers. As expected, follower perceptions of leader trustworthiness were related to follower perceptions of LMX quality and this positive relationship was amplified when followers also felt empowered. Additionally, followers' perceptions of the quality of LMX mediated the relationship between followers' perceptions of leader trustworthiness and leaders' trust in the follower. Thus, trust develops when followers perceive high-quality LMX relationships with their leaders, and empowerment amplifies this relationship. In essence, leaders trust followers when followers perceive high quality LMX relationships and interact with them accordingly. In terms of the when question, our research suggests that followers' perceptions of leader trustworthiness and empowerment interact to cause followers to perceive a high-quality LMX relationship with their leaders, which then elicits trust from leaders.\nConclusion: These findings demonstrate the critical mediating role of LMX in the development of mutual trust. Our findings suggest that to increase leaders' trust in their followers, organizations must create policies and practices that encourage cooperation and reciprocity between leaders and their followers. Employee perceptions of empowerment contribute to closer and higher quality relationships with their supervisor.\n", "role": "user" }, { "content": "Purpose: Followers' perceptions of leader trustworthiness affect their trust in the leader (Colquitt et al., 2007). However, because positive benefits of trust generally accrue when trust is reciprocated, examining when and why followers' perceptions of leader trustworthiness elicit leader's trust in followers may provide heuristic and practical value. Therefore, the purpose of this paper is to examine if followers' perceptions of leader trustworthiness elicit leaders' trust in followers, casting follower's perceptions of leader-member exchange (LMX) quality as a mediator and their perceptions of empowerment as moderating this mediated relationship.\nDesign/methodology/approach: Followers' perception of leader trustworthiness was measured at Time 1, followers' perceptions of empowerment and LMX were measured at Time 2 and leaders' trust in followers was measured in Time 3. The authors tested the research model with data collected from 347 leader-follower dyads using the three time-lagged surveys as noted above.\nFindings: Followers' perceptions of leader trustworthiness and perceptions of empowerment interacted to influence followers' perceptions of LMX quality, which in turn influenced leaders' trust in followers. 
Thus, LMX mediated the trustworthiness-trust relationship and this mediated relationship became stronger at increasing levels of empowerment.\nPractical implications: Being trusted by leaders is beneficial to followers. Training managers in behaviors that elicit employees' perceptions of manager trustworthiness has the potential to accrue benefits to employees, managers and the organization.\nOriginality/value: This study is the first to demonstrate that followers' perception of leader trustworthiness resulted in leaders trusting followers. In addition, the results support a mediating role for LMX and a moderating role for empowerment in the development of leader trust in the follower. Understanding when and why leaders trust followers offers important insights about development of trust between followers and leaders. The authors discuss implications for theory and practice.\n", "role": "assistant" } ]
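The moderated mediation summarized above (trustworthiness -> LMX -> leader trust, with empowerment moderating the first stage) can be sketched with two OLS regressions plus a conditional indirect effect. This is a minimal sketch on simulated dyad data; the variable names and effect sizes are assumptions, not the authors' measures or results.

```python
# Minimal moderated-mediation sketch on simulated leader-follower dyad data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 347  # reported number of leader-follower dyads
trustworthiness = rng.normal(0, 1, n)
empowerment = rng.normal(0, 1, n)
lmx = 0.5 * trustworthiness + 0.3 * trustworthiness * empowerment + rng.normal(0, 1, n)
leader_trust = 0.6 * lmx + rng.normal(0, 1, n)
df = pd.DataFrame(dict(trustworthiness=trustworthiness, empowerment=empowerment,
                       lmx=lmx, leader_trust=leader_trust))

# First stage: does empowerment amplify the trustworthiness -> LMX path?
stage1 = smf.ols("lmx ~ trustworthiness * empowerment", data=df).fit()
# Second stage: does LMX carry the effect through to leaders' trust?
stage2 = smf.ols("leader_trust ~ lmx + trustworthiness", data=df).fit()

# Conditional indirect effect at -1/0/+1 SD of empowerment: (a1 + a3*z) * b.
a1 = stage1.params["trustworthiness"]
a3 = stage1.params["trustworthiness:empowerment"]
b = stage2.params["lmx"]
for z in (-1, 0, 1):
    print(f"indirect effect at empowerment={z:+d} SD: {(a1 + a3 * z) * b:.3f}")
```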
Research on the influence of balance patch on players' character preference
[ "MOBA game", "Balance patch", "Character preference", "Psychological distance", "Character selection pattern" ]
[ { "content": "Summarize the following paper into structured abstract.\n\n1. Introduction: With the continuous upgrading of terminal hardware and the improvement of game quality, video and computer games have become more popular, profitable and competitive than ever (Seo et al., 2015; Xia et al., 2019). In a report released by Superdata (2018), League of Legends (LoL, Riot Games) was the most profitable game with over $2 billion generated in 2017. LoL is a multi-player online battle arena (MOBA) game where players are matched, based on skill, into two opposing teams with five players on each side. In order to destroy the opposing team's headquarters, each player chooses one out of different characters, called champions, to control during the game (Claypool et al., 2017). In this case, game balance is an important element to ensure fair competition and attract more players to enjoy the MOBA games (Kohana et al., 2014). Many modern computer games use software patching to adjust game balance and add content to the game, which effectively alters the gameplay for current players (Silva et al., 2017). Adding game content and changing game balance is useful to make the game fun and engaging (Altimira et al., 2017; Claypool et al., 2015). However, studies regarding the effects of patching on the player preference were lacking.In recent years, the developers deliberately intend to patch the game with both bug fixes to existing problems as well as add additional content as the player base grows. While bug fixes and technology improvements are likely welcomed by nearly all players, new game content and balance changes are different (Claypool et al., 2017). This ubiquitous use of updates by software vendors in practice reflects in a large body of research on how and when software functionality should be developed and delivered in order to maintain the technical integrity of the software and optimize the vendor's production process (Svahnberg et al., 2010). For example, systems-type research has used data traces to analyze the effectiveness of different strategies to deliver software patches (Gkantsidis et al., 2006). While security patches of games are often applied to fix code so that known \"cheats\" (exploits that players uncover to gain an unfair advantage) are no longer possible (Yeung and Lui, 2008), such patches do not typically enhance the patched software with new features. While this stream of research does account for player needs, its primary focus lies on the supply side, exploring technical design aspects of game. The behavioral dimension, i.e., how patches are perceived by users is still an under-explored area (Hong et al., 2011; Fleischmann et al., 2016). There are two basic types of game patches: feature patches and non-feature patches. Feature patches change the core functionality of software, which are deliberately employed by the user in accomplishing the task. In contrast to feature patches, technical non-feature patches do not change the core functionality of software but only correct flaws (e.g. bug fixes) or change properties that are not directly related to its core functionality (e.g. improvements in stability, security or performance). Because non-feature patches do not increase users' continuance intentions (Fleischmann et al., 2016), the article explores the impact of feature patches on player preference.In this paper, the feature patches are referred to as balance patches, which include buff patches, nerf patches and neutral patches (Claypool et al., 2017). 
While a buff patch increases the strength of a champion, a nerf patch decreases the strength of a champion. In the MOBA games, some characters are too powerful or too weak, which makes it more likely that the side with the dominant characters will win. In each version, manufacturers regularly issue balance patches to weaken the characters that are too powerful and enhance the characters that are too weak. Game balancing could help maintain fairness (Kohana et al., 2014), and improve the players' experience (Wang, 2019) and thus enhance player engagement (Altimira et al., 2017; Liu et al., 2019). Moreover, psychological distance can reduce the intensity of negative feelings (Ayduk and Kross, 2008; Kross et al., 2005) and increasing psychological distance between the self and complex tasks can reduce feelings of difficulty (Thomas and Tsai, 2012). Meanwhile, some players like switching characters frequently, while others are usually faithful to some characters (Huang et al., 2017; Liao et al., 2019). Thus, our study aimed to examine the effects of balance patches (buff patches vs nerf patches) on players' preference and the moderating effects of psychological distance and character selection pattern. We conducted two studies to examine the effects of balance patches on character preference in MOBA games. In study 1, we test the main hypothesis that balance patches result in changes to players' character preference and examine the role of psychological distance and character selection pattern as moderators. Study 2 confirms the results obtained in Study 1 in a situational experiment and further tests the proposed underlying mechanism by directly manipulating patch adjustment perception.\n2. Theoretical basis and research hypothesis: 2.1 Stimulus-organism-response model\n3. Study 1: League of Legends (LoL) provided an unprecedented research opportunity due to the combination of two factors: the huge number of players and how easy it is to access part of the generated and recorded in-game data (Mora-Cantallops and Sicilia, 2018). Application programming interfaces (APIs) of LoL (functions allowing the creation of applications that access such applications' data and retrieve it) were available and officially supported. Therefore, we tested the assumptions through empirical data analysis in Study 1. 3.1 Method\n4. Study 2: Study 2 adopted a 2 (patch type: buff patch vs nerf patch) x 2 (psychological distance: near vs distant) x 2 (character selection pattern: stable vs variable) between-subjects design. The study attempted to confirm the results obtained in Study 1 and further explored the mediating effect of patch adjustment perception between patch type and character preference. 4.1 Method\n5. Discussion: We used two studies to examine the effects of balance patches and the moderating effects of psychological distance and character selection pattern on character preference. The results showed that the buff patch increases players' character preference, while the nerf patch decreases players' character preference. Psychological distance and character selection pattern mitigated the decrease in character preference caused by nerf patches. 5.1 Theoretical contributions\n", "role": "user" }, { "content": "Purpose: Balance patches are an important but not well-studied means of maintaining game fairness and improving player entertainment.
In this paper, we examine the effect of balance patches on players' character preference and further explore the moderating effects of psychological distance and character selection pattern.\nDesign/methodology/approach: In study 1, a web crawler was used to get server-side data of 40,974 multi-player online battle arena (MOBA) players through official application programming interfaces (APIs). A paired t-test and a stepwise regression were performed to verify the hypotheses. In study 2, a 2 (patch type: buff vs nerf) x 2 (psychological distance: near vs distant) x 2 (character selection pattern: stable vs variable) between-subjects design was adopted to confirm the empirical conclusions through a questionnaire survey and to further explore the mediating effect of patch adjustment perception.\nFindings: The analyzed results showed that the buff patch led to an increase in players' character preference, while the nerf patch led to a decrease in players' character preference. Moreover, the main effect was mediated by patch adjustment perception. Furthermore, psychological distance and character selection pattern both moderated the relationship between balance patch and character preference changes. Character preference among players at a near psychological distance increased more significantly in response to buff patches and decreased more significantly in adverse situations. Similarly, players with a variable character selection pattern were more sensitive to the stimuli, and the character preference of the variable group changed more significantly than that of the stable group following balance patch releases.\nOriginality/value: This paper studies the influence of patches to the balance of character strength on player preference, which expands the research on game balance and fairness. The present results contribute to theoretical research on consumer behavior regarding psychological distance and character selection patterns elicited by balance patches. Meanwhile, the results indicate that psychological distance theory can apply to the study of the relationship between players and virtual characters.\n", "role": "assistant" } ]
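Study 1's core test, comparing character preference before and after a patch with a paired t-test, can be sketched as follows. The pick-rate data here are simulated; in the study, preference measures were derived from 40,974 players' server-side records obtained through official APIs.

```python
# Illustrative paired comparison: per-player pick rates for a champion before
# vs after a buff patch, followed by a paired t-test. Data are simulated.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n_players = 500  # illustrative; the paper reports 40,974 players
pick_rate_before = rng.beta(2, 8, n_players)  # share of games on the champion pre-patch
# Simulate a small post-buff bump in preference, clipped to valid proportions.
pick_rate_after = np.clip(pick_rate_before + rng.normal(0.03, 0.05, n_players), 0, 1)

t_stat, p_value = stats.ttest_rel(pick_rate_after, pick_rate_before)
print(f"mean change = {np.mean(pick_rate_after - pick_rate_before):+.3f}, "
      f"t = {t_stat:.2f}, p = {p_value:.4f}")
```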
Agent-based monitoring using case-based reasoning for experience reuse and improved quality
[ "Decision support systems", "Condition monitoring", "Intelligent agents", "Quality improvement", "Robotics" ]
[ { "content": "Summarize the following paper into structured abstract.\n\nIntroduction: Society has moved from the industrial age to the knowledge age, where information and experience are one of the most valuable resources. This is also becoming increasingly obvious in production industry, how well a production company reuse experience and integrate new knowledge and research results into their production may both be one of the key factors for success as well as a major long-term survival factor. Research is rapidly moving forward in the area of condition monitoring and diagnostics (Mba and Rao, 2005), but production companies not following progress in research and integrating research results into their production when results are ripe for commercial deployments take a serious risk. A large part of a company's knowledge and experience is also produced by human mistakes and often costly incidents. When engineers need to make a decision to prevent or correct, they often encounter the information overload problem (Mangina, 2005). Human information overload is also a serious issue, especially when quality of information is difficult to judge, faulty, or missing. Although valuable experience in the same or similar situation may be available from a phone call away, such information is rarely found when needed. Bengtsson et al. (2007) states that \"experience reuse, in even the simplest form of, e.g. checklists, and subjective monitoring\" is helpful. It is increasingly difficult to meet customers' demands, requiring increased production quality, flexibility, reliability and fast delivery times is a major challenge for industry. These requirements make product development and manufacturing increasingly complex.In this research, we explore how methods and techniques from artificial intelligence can be used to enable systematize in production industry. Large companies have reduced their technical support costs with up to 33 per cent by deploying methods and techniques from artificial intelligence (Cheetham, 2006).In this study, we focus on agents-based technology and case-based reasoning (CBR). Intelligent agents offers a concept and framework enabling flexible interactions with agents, systems and humans. An agent may also have knowledge on what the engineers currently are working on and what knowledge they have in order to offer better support and identify relevant knowledge and experience, e.g. past solved cases, documentations, preventive maintenances (PMs), etc.CBR is an efficient method to identify past experience in the form of cases that may help to solve the current problem. Intelligent agents deploying CBR enable the agents to gain experience by collecting past solved cases, adapt them to current problem and context, e.g. the experience level of the technician. By identifying similar situations, transfer relevant information and experience, and adapt these cases to the current situation will both transfer knowledge and help this decision process. Some decisions can be made autonomously by the agent in critical situations if no technician is close by. CBR solutions may also reduce costs for technical support since technicians often need considerably less time the second time they encounter a similar problem. But even access to the case containing the solution to a similar problem saves often considerable time according to the technicians.Using intelligent agents for monitoring is an important path to the next generation of monitoring systems (Mangina, 2005). 
We suggest that the following are valuable properties of an agent-based approach to condition monitoring; agents:* enable decentralization of decisions and reduction of complexity;* enable localized expertise and experience reuse;* enable learning and experience sharing between agents with similar tasks;* can be implemented to collaborate with other agents and humans; and* are able to make informed decisions on what actions to take or when to verify actions or interact with humans. There are already research prototypes demonstrating the value of agent-based monitoring systems in industrial applications (Pirttioja et al., 2005).\nIntelligent agents: In this section, we give a brief introduction to agents and also motivate why the agent concept is suitable for the domain of condition monitoring. There are a number of different definitions of agents, but the differences are often less important for practical applications; definitions are often based on the fact that agent research is application oriented, and having a particular application in mind may lead to slightly different definitions (Franklin and Graesser, 1997). Russell and Norvig (1995) state that \"an agent is anything that can be viewed as perceiving its environment through sensors and acting upon that environment through effectors\". A purely reactive system like a thermostat would meet this basic definition, but most definitions also include social skills (communication) and some \"intelligent\" behaviour, e.g. pro-activeness, learning and prediction. An agent's goals or desires may be to decrease maintenance costs without reducing reliability and life expectancy. Wooldridge and Jennings (1995) define agents to be computer systems that have properties such as:* autonomy;* social abilities; and* reactivity and pro-activeness. In the context of condition monitoring, the agent perceives the environment through one or more sensors. Additional information about the environment may also be acquired through communication with other agents or systems (a system may be given an agent wrapper to enable uniform communication). In the context of condition monitoring, an agent's ability to influence its environment may be to operate a valve/switch or adjust a process. An action may also be to communicate with other agents or humans, e.g. a technician close by, and ask for help to carry out preventive or corrective actions.The maintenance agent\nFactors affecting decisions by agents: In an agent-based approach, a critical issue is the agent's decision whether to take a certain action autonomously or to collaborate with humans (technician/economist) in problem solving. For this decision, a number of factors have to be considered. Some of them are domain dependent and also dependent on the reasoning method applied by the agent, since different methods and techniques enable more or less informed decision making. CBR systems have some desirable properties enabling an informed decision. These systems reason in terms of similarity, confidence and usefulness. These are important for agents as well as humans during a decision-making process. The most central measurement in CBR systems is similarity, i.e. how similar a previous case is to the current situation, also reflecting how well the previous solution can be reused in the current situation. Some adaptation of the solution may be needed.
If all the symptoms and conditions are identical between the current situation and the case from the case library, the same solution can be reused without modifications. Otherwise, even if a good matching case is found, other factors may influence the decision, and the agent may decide either to do nothing, to apply/adapt a solution case further down the similarity ranking list, or to apply multiple solutions. Other factors the agent may take into account to make an informed and final decision are given below. Some of the most central factors in the context of maintenance are (not given in priority order):* how similar the case is to the current situation;* the track record of how successful the case was in the past;* the confidence in the case and its solution given the current situation;* the benefit of a case (how efficiently the solution solves the problem);* the cost of implementing the solution correctly;* the consequences/costs, short-term/long-term, if the agent/human remains idle and takes no action; and* the consequences/costs if the proposed solution in the case is wrongly deployed. The track record of a case (factor 2) is important since a good matching case may have a less good track record and other statistics appearing against it. The confidence in a case (factor 3) may depend on the nature of the cases in the case library that are similar to the current situation. For example, if there are many similar cases nearby with the same solution, the confidence grows. But if there are surrounding cases with a very different solution, the confidence in the most similar case may be reduced. The benefit of a case (factor 4) may also have a large influence on which case is selected, e.g. one case may have a solution fixing the problem temporarily while other cases offer a long-term solution. Factors 5-7 are cost- and risk-related factors. They have a major influence on the selection of the final case and also on whether an expert should be included in the decision process. These factors should reflect company policy, the current economic situation and also laws and regulations. An example would be a well-matching case that is the best matching case, with high confidence and a good success rate connected to it, but whose solution increases the risk of hazardous leakage damaging the environment; environmental regulations may forbid this, and it may give the company unacceptable negative publicity. It may be argued that some of these factors could be included in the case descriptions and similarity measurement, but since many of them change with time and are complex in themselves, we propose that for monitoring and diagnostic tasks in industry it may be an advantage, and lead to reduced complexity, if these factors are handled separately in the decision process or decision support process instead of being completely integrated into the CBR system.\nDesigning and building agent-based systems using artificial intelligence: In this section, we give an example of an agent-based maintenance system able to perform corrective maintenance, PM, and condition-based maintenance (CBM) (possibly with help of other agents and technicians). Agents are being implemented with a wide variety of different techniques, using both traditional software engineering methods and techniques such as object-oriented programming and artificial intelligence methods and techniques such as artificial neural networks or CBR.
CBR (Aamodt and Plaza, 1994) offers an alternative way to implement intelligent diagnosis systems for real-world applications (Nilsson et al., 2003). Motivated by the doctrine that similar situations lead to similar outcomes, CBR is well suited to classifying new sensor signals based on experience of past categorizations. The main strength lies in the fact that it enables directly reusing concrete examples from history and consequently eases the knowledge acquisition bottleneck (Aamodt and Plaza, 1994). An example of an agent using CBR is shown in Figure 2. The sensor signal is pre-processed, compared with the case library based on domain dependent similarity matching; the best matching cases are adapted to the current situation and suggested as solutions to a human. Prototype agent-based fault diagnosis system based on sensor signals and CBR - a case study\nConclusions: Artificial intelligence offers a number of methods and techniques, which offer potential benefits if harnessed properly. The agent paradigm and agent architecture are among these. In combination with other methods such as CBR, the potential for building valuable systems for monitoring and maintenance is argued to be large, enabling valuable properties that are sometimes difficult to achieve with traditional software architectures and engineering. This is exemplified with a system that we have implemented in the area of industrial machine diagnostics. The case-study shows that the concept of intelligent agents in combination with CBR offers the potential for increased production quality, flexibility, reliability and fast delivery times in the production industry. Agents may enable improved decision making since they can be designed to learn from experience, use and transfer experience relevant to the current situation, and collaborate with other agents, systems and humans. This enables flexible and modular maintenance systems where different suppliers can deliver agents developed to be experts in specific tasks. Most of the different parts needed for an agent-based maintenance approach have been shown to work in different research projects and case studies. To bring the benefits of agent-based maintenance systems to industry, the first issue is to spread knowledge and understanding of the agent paradigm and its strengths and weaknesses. Spreading information on successful research projects where the agent paradigm is used, or where the agent paradigm has been used as a thought concept for ideas, implementation and functionality, is also important. But industry does not need to wait for an industry standard framework for maintaining agents; thinking in terms of agents already gives a number of advantages in system design and development, and may lead to valuable features and functionality that may not have been thought of without this framework of thoughts.\n", "role": "user" }, { "content": "Purpose: - The purpose of this paper is to propose an agent-based condition monitoring system for use in industrial applications. An intelligent maintenance agent is described that is able to autonomously perform necessary actions and/or aid a human in the decision-making process. An example is presented as a case-study from manufacturing of industrial robots.\nDesign/methodology/approach: - The paper is mainly based on a case-study performed at a large multi-national company aiming to explore the usefulness of case-based experience reuse in production.\nFindings: - This paper presents a concept of case-based experience reuse in production.
A maintenance agent using a case-based reasoning (CBR) approach to collect, preserve and reuse available experience in the form of sound recordings exemplifies this concept. Sounds from normal and faulty robot gearboxes are recorded during the production end test and stored in a case library together with their diagnosis results. Given an unclassified sound signal, relevant cases are retrieved to aid a human in the decision-making process. The maintenance agent demonstrated good performance by making the right judgment in 91 per cent of all tests, which is better than an inexperienced technician.\nPractical implications: - Experienced staff acquire their experience over many years of practice and sometimes also through expensive mistakes. The acquired experience is difficult to preserve and transfer, and it often gets lost if the corresponding personnel leave their job due to retirement, etc. The proposed CBR approach to collect, preserve and reuse the available experience offers large potential for time and cost savings, predictability and reduced risk in the daily work. The paper exemplifies experience reuse for quality improvement in production using a number of methods and techniques from artificial intelligence.\nOriginality/value: - The main focus of this paper is to show how to perform efficient experience reuse in modern production industry to improve the quality of products. Two approaches are used: a case-study describing an example of experience reuse in production using a fault diagnosis system recognizing and diagnosing audible faults on industrial robots, and an efficient approach to packaging such a system using the agent paradigm and agent architecture.\n", "role": "assistant" } ]
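A minimal sketch of the CBR retrieval step this abstract describes: matching a new (pre-processed) sound-feature vector against a case library and returning the most similar past diagnoses. The feature dimensionality, labels, and the cosine-similarity measure below are illustrative assumptions, not the implemented system.

```python
# Toy CBR retrieval: find the most similar past cases for a new sound signal.
import numpy as np

rng = np.random.default_rng(3)
case_library = rng.normal(0, 1, (200, 16))  # 200 past cases, 16 spectral features each (assumed)
diagnoses = rng.choice(["normal", "faulty_gearbox"], 200)  # stored solutions/labels (assumed)

def retrieve(query, k=3):
    """Return the k most similar cases by cosine similarity."""
    sims = case_library @ query / (
        np.linalg.norm(case_library, axis=1) * np.linalg.norm(query))
    top = np.argsort(sims)[::-1][:k]
    return [(diagnoses[i], float(sims[i])) for i in top]

new_recording_features = rng.normal(0, 1, 16)  # pre-processed signal from a new end test
for label, sim in retrieve(new_recording_features):
    print(f"{label}: similarity {sim:.2f}")
```

In a real deployment the retrieved cases would then be adapted to the current context and ranked against the cost- and risk-related factors the paper lists before any action is suggested.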
Behavioral and lifestyle influences on reported calorie intake: a latent class model
[ "Nutrition", "Diet", "Health", "Calories", "Latent class" ]
[ { "content": "Summarize the following paper into structured abstract.\n\nIntroduction: Health policy experts and marketers alike seek to understand increases in caloric intake and food choices. Policymakers need a better understanding of what contributes to obesity and weight-related diseases to develop appropriate interventions. Marketers seek to target specific segments of the population with respect to their food choices. Advanced segmentation techniques offer a dual benefit by providing health policy researchers with an increased understanding of factors driving consumption and allowing food marketers to gain improved insights for targeting particular food shopper segments.\nTheoretical background: While previous dietary research on reported calorie intake has primarily focused on demographic differences, literature from varied fields of study, such as nutrition, public health and food sociology, suggests that lifestyle and behaviors can add explanatory power when examining why certain groups of consumers tend to differ in their food consumption preferences. This paper begins by reviewing research showing how demographics can impact food consumption, and then examines how lifestyle and behavioral factors may also influence reported calorie intake. A person's body mass index (BMI) is another important aspect to consider, both in terms of actual consumption and self-reporting of calorie intake. Extant research related to the moderating role of BMI is reviewed, and the theoretical background concludes with studies that have examined the self-reporting of calorie intake.\nMethods: Data\nResults: Descriptive statistics\nGeneral discussion: This research utilized a multi-model approach to help determine which factors influence reported calorie intake and how segments of the US population differ with respect to these factors. First, a series of regression models determined the significant behavioral, lifestyle and sociodemographic variables that contribute to differences in reported calorie intake. Some of the behavioral and lifestyle variables were found to have interactive effects on reported consumption, which indicates that reasons for food choices are complex for most people. Next, these variables were used to derive and describe five distinct consumer segments.\n", "role": "user" }, { "content": "Purpose: The purpose of this paper is to study the behavioral and lifestyle influences on reported calorie intake. Marketing segmentation techniques applied to self-reported food consumption can offer benefits to both health policy and marketing research.\nDesign/methodology/approach: The two-stage modeling process in this research determines important behavioral, lifestyle and sociodemographic influences on reported calorie intake. Significant predictors are then included in latent class models, which are used to derive and describe five consumer segments.\nFindings: These segments differ with respect to their food-related activities, such as dieting, grocery shopping and preparing food at home. The segments also differ with respect to lifestyle characteristics, such as household size, employment status and income. Data obtained from a multi-period probability sample help generalize the results to the US population.\nOriginality/value: The models developed in this paper can inform health policymakers by explaining reported calorie intake patterns more thoroughly than demographics alone, aiding their ability to create more targeted interventions. 
This approach also allows food marketers to clarify consumer insights that can be used for targeting particular food shopper segments.\n", "role": "assistant" } ]
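A hedged sketch of the two-stage idea in this abstract: keep the predictors a first-stage regression flags as relevant, then segment respondents into latent classes. scikit-learn has no categorical latent class model, so a Gaussian mixture stands in here purely for illustration; all variables are invented.

```python
# Illustrative segmentation stage: a Gaussian mixture as a stand-in for a
# latent class model, fit on invented behavioral/lifestyle variables.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(4)
n = 1000
X = np.column_stack([
    rng.integers(0, 2, n),     # currently dieting (0/1), assumed predictor
    rng.poisson(3, n),         # grocery trips per week, assumed predictor
    rng.integers(1, 7, n),     # household size, assumed predictor
    rng.normal(2000, 400, n),  # reported daily calorie intake, simulated
])

gmm = GaussianMixture(n_components=5, random_state=0).fit(X)  # five segments, as in the paper
segments = gmm.predict(X)
for s in range(5):
    members = X[segments == s]
    print(f"segment {s}: n={len(members)}, mean calories={members[:, 3].mean():.0f}")
```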
Effects of different weights and lifting postures on balance control following repetitive lifting tasks in construction workers
[ "Falls", "Fatigue", "Weight", "Balance control", "Lifting posture" ]
[ { "content": "Summarize the following paper into structured abstract.\n\nIntroduction: Fall injuries are a leading cause of fatal injuries and the second most common cause of non-fatal injuries in the construction industry (Center to Protect Workers' Right, 2007). According to the US Bureau of Labor Statistics, fall injuries in the construction industry accounted for 32 percent of all work-related deaths (Bureau of Labor Statistics, 2006a) and 34 percent of non-fatal injuries (BLS, 2006b). Fall-related injuries are also prevalent amongst the general public, especially among the elderly (Zigel et al., 2009; Jiang et al., 2011). Slips, trips and loss of balance are common contributing factors to fall injuries on a level surface (Hsiao and Simeonov, 2001; Lipscomb et al., 2006). While slips and trips can be mitigated by ergonomic design of the working environment, balance control is inherently far more complex and relies upon the coordination of multiple sensory systems (visual, vestibular, and proprioception/somatosensory), the motor system and central nervous system (Punakallio, 2005; Horak, 2006). Impaired balance control (i.e. increased postural sway) has been linked to an increased risk of falls (Prieto et al., 1996; Corbeil et al., 2003; Paillard, 2012). Therefore, any potential interventions to minimize workplace falls and concomitant injuries sustained, must ensure that balance control is not impaired by personal, environmental and task-related risk factors (Hsiao and Simeonov, 2001).\nResearch methods: An experimental laboratory controlled test procedure was adopted for this research. In total, 20 healthy participants (all males) were recruited from the student population of the Hong Kong Polytechnic University to participate in this study. The participants mean age was 27.9+-4.0 years, weight was 71.0+-8.97 kg, and height was 1.74+-0.09 m. There was no significant difference in age, height, and weight of participants in both groups. Test entry criteria for participants were: no history of upper limb, back or lower limb pain/injury; and no history of neurological and/or vestibular disorders or other conditions that might affect balance control. Participants provided their informed consent as approved by the Human Subject Ethics Subcommittee of The Hong Kong Polytechnic University (Reference Number: HSEARS20160719002). Upon consent being given, participants provided their demographic data and were randomized into either a stoop lifting or a squat lifting group (ten participants each). Each participant's maximum lifting strength (MLS) in a stoop or squat lifting posture was then assessed by a back-leg lift dynamometer (Chattecx Corporation, USA). Each group of participants was assigned an allotted lifting posture (i.e. stoop or squat lifting) and requested to gradually pull up the handle of the dynamometer until they reached their perceived MLS. Each participant performed the test twice with a two-min break in between; the highest value of the two trials recorded on the dynamometer represented the participant's MLS (Piezotronics, New York Inc., USA). As a result, the participants' mean MLS for stoop and squat lifting postures was 95.4+-17.4 kg and 110.7+-13.86 kg, respectively.\nResults: Figure (1a-e) summarizes the arithmetic mean and standard deviation (SD) for RMS of CoP A/P displacement, RMS of CoP M/L displacement, MV of CoP A/P displacement, MV of CoP M/L displacement and total sway area for each balance test condition immediately after the stoop and squat lifting tasks. 
All CoP parameters revealed no significant difference between lifting postures in the three balance test conditions (p>0.05), although the absolute values of all CoP parameters following the repetitive squat lifting task were larger than those following the stoop lifting task under all balance test conditions (refer to Figure (1a-e)).\nDiscussion: Analysis results revealed no significant difference between lifting postures after the fatiguing lifting task across all balance test conditions. This finding indicates that fatiguing repetitive stoop and squat lifting postures induce a similar balance control deficit. Consequently, this finding refutes our first hypothesis that the stoop lifting posture would induce greater variations in balance control than the squat lifting posture following fatiguing repetitive tasks. In addition, while increased repetitive lifting weight produced a significantly larger increase in CoP parameters (both RMS and MV of CoP A/P and M/L displacement) under the ECF condition (when compared to either the EOS or ECS condition), increased lifting weight caused no significant difference in CoP parameters (total sway area, RMS of CoP A/P and M/L displacement and MV of CoP M/L displacement) between the EOS and ECS conditions. These findings confirm our second hypothesis that fatiguing repetitive lifting tasks cause poorer balance control on an unstable surface when compared to a stable surface (Yaggie and McGregor, 2002; Corbeil et al., 2003).\nConclusions: This is the first study to evaluate the effects of different lifting weights and lifting postures on balance control following simulated fatiguing repetitive lifting tasks. The results revealed that: increased weight regardless of lifting posture significantly increased CoP parameters; stoop and squat lifting postures performed until subjective fatigue induce a similar increase in CoP parameters; and fatigue affected the participants' balance control more adversely on an unstable surface than on a stable surface. These results suggest that fatiguing repetitive lifting tasks may alter the proprioception of the lower limb/back, which leads to increased postural sway and suboptimal balance control on an unstable supporting surface. Consequently, fatigue-related loss of balance control may limit the safe range of movement of the body's center of gravity, and thus increase the risk of fall injuries. The findings of the present study have research and practical implications. First, the magnitude of weight during repetitive lifting tasks can significantly impair balance control; limiting the weight lifted may therefore reduce the risk of loss-of-balance events and subsequent fall injuries. Second, balance control depends on surface support conditions; unstable supporting surfaces significantly increase the effort required for balance control, so minimizing their use could be useful in preventing fall injuries among construction workers. To reduce the possibility of losing balance, unstable supporting structures (e.g. scaffold, ramp) used as working surfaces should be minimized when performing static repetitive lifting tasks. Third, the findings demonstrate the potential of the suggested objective balance stability parameters for measuring the fall risk associated with static repetitive lifting tasks, resulting from extrinsic (e.g. weight of lift) and intrinsic (e.g. fatigue) factors.
Construction workers can benefit from receiving adequate training in recognizing the role of lifting weights and fatigue during static repetitive lifting tasks, which would result in enhanced balance control through redesign of work and improved workers' behavior. Overall, these findings provide preliminary and invaluable information to researchers and practitioners seeking to develop practical interventions to reduce the risk of falls in construction workers (e.g. masons, rebar workers) involved in repetitive lifting tasks. Future studies should investigate the optimal working and rest durations among workers involved in repetitive lifting work in order to reduce the risk of fatigue-related balance deficits.\n", "role": "user" }, { "content": "Purpose: Repetitive lifting tasks have detrimental effects upon balance control and may contribute toward fall injuries, yet despite this causal linkage, the risk factors involved remain elusive. The purpose of this paper is to evaluate the effects of different weights and lifting postures on balance control using simulated repetitive lifting tasks.\nDesign/methodology/approach: In total, 20 healthy male participants underwent balance control assessments before and immediately after fatiguing repetitive lifting tasks using three different weights in a stoop (ten participants) or a squat (ten participants) lifting posture. Balance control assessments required participants to stand still on a force plate with or without a foam (which simulated an unstable surface) while center of pressure (CoP) displacement parameters on the force plate were measured.\nFindings: Results reveal that: increased weight (but not lifting posture) significantly increases CoP parameters; stoop and squat lifting postures performed until subjective fatigue induce a similar increase in CoP parameters; and fatigue adversely affected participants' balance control on an unstable surface vis-a-vis a stable surface. Findings suggest that repetitive lifting of heavier weights would significantly jeopardize individuals' balance control on unstable supporting surfaces, which may heighten the risk of falls.\nOriginality/value: This research offers an entirely new approach to measuring the impact that different lifting weights and postures may have upon worker stability and the consequential fall incidents that may arise.\n", "role": "assistant" } ]
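The balance measures named in this abstract (RMS of CoP A/P and M/L displacement, mean velocity, and total sway area) can be computed from a force-plate recording roughly as below. The simulated signal, sampling rate, and 95 per cent confidence-ellipse formulation of sway area are assumptions for illustration.

```python
# Compute CoP sway parameters from a simulated 30 s force-plate recording.
import numpy as np

rng = np.random.default_rng(5)
fs = 100.0                                  # sampling rate (Hz), assumed
t = int(30 * fs)
cop_ap = np.cumsum(rng.normal(0, 0.05, t))  # anterior-posterior displacement (cm), simulated
cop_ml = np.cumsum(rng.normal(0, 0.04, t))  # medial-lateral displacement (cm), simulated
cop_ap -= cop_ap.mean()                     # express displacement about the mean CoP
cop_ml -= cop_ml.mean()

rms_ap = np.sqrt(np.mean(cop_ap ** 2))
rms_ml = np.sqrt(np.mean(cop_ml ** 2))
# Mean velocity: total path length divided by recording duration.
mv_ap = np.sum(np.abs(np.diff(cop_ap))) * fs / t
mv_ml = np.sum(np.abs(np.diff(cop_ml))) * fs / t
# 95% confidence-ellipse sway area from the displacement covariance
# (chi-square with 2 df at .95 is 5.991).
eigvals = np.linalg.eigvalsh(np.cov(cop_ap, cop_ml))
sway_area = np.pi * 5.991 * np.sqrt(eigvals[0] * eigvals[1])

print(f"RMS A/P={rms_ap:.2f} cm, RMS M/L={rms_ml:.2f} cm, "
      f"MV A/P={mv_ap:.2f} cm/s, MV M/L={mv_ml:.2f} cm/s, sway area={sway_area:.2f} cm^2")
```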
Confucianism: measurement and association with workforce performance
[ "Confucianism", "Measurement", "East Asia", "Best-Worst scaling", "Pedagogy/discipline", "Workforce performance" ]
[ { "content": "Summarize the following paper into structured abstract.\n\nIntroduction: The effect of Confucianism on economic development has long been an area of interest to researchers seeking an explanation for the region's rapid growth (Kahn, 1979; MacFarquhar, 1980; Tu, 1989). Observing their high performance at the national and industry levels, scholars were quick to point to Confucianism as being the common denominator within East Asia. Thus, from a theoretical perspective, many believe that the values propagated by Confucian teachings equipped these societies with a modus operandi to excel in industrialization (Berger, 1988). Values such as future orientation, hard work, fervour for education, and frugality were considered the hallmark of Asia's success story and served as a basis from which other societies could follow (Berger, 1988; Hofstede and Bond, 1988). Despite consensus, analyses at the macro-level remain fairly theoretical to the extent that our understanding of how Confucian culture drives performance is speculative at best. Further, does this theory of Confucianism and performance hold up at the individual level? Recently, Baumann and Winzar (2017) posed an important question on whether \"work ethic\" is actually a cultural issue, and this study follows the call to shed light on a longstanding, yet unresolved question, still.\nTheoretical foundation: Confucianism is old - perhaps one of the oldest ideologies, philosophies, or religions in the world. Though not considered influential during his lifetime (551-479 B.C.E), the teachings of Confucius have endured to form the basis of many important values attributed to East Asian societies, including China, Japan, South Korea (hereafter, Korea), Taiwan, Vietnam and Singapore (Hoobler and Hoobler, 2009). Confucian teachings are essentially humanistic, in that they emphasize the importance of human relationships with any notion of the \"self\" as being inextricably linked to the \"group\" (Yao, 2001). At its core is the importance of social relationships, collective interests, and harmony from which one's daily behavior ought to be guided by the rules of rites and propriety (Hang, 2011). Fang (1999) conceives Confucianism as involving six core values: moral cultivation, the importance of interpersonal relationships, family orientation, respect for age and hierarchy, the avoidance of conflict, and the concept of face. Each reflects an important strategy for the maintenance of relationships, and a self-regulating moral compass in guiding behavior, as can be seen, for example, in the Chinese's business negotiating style (see Fang, 1999).\nResearch methods: Sample\nFindings: Confucian values and workforce performance: prioritizations and differences\nDiscussion: The main purpose of this study was to investigate the link between Confucianism and workforce performance amongst individuals from three East Asian societies. Our approach required us to first consider the ways in which Confucianism has been considered (and overlooked) in the extant literature, and then assess its degree of heterogeneity within and across three societies. Taking an ipsative approach to the measurement of values and performance, our study highlights some interesting results which contribute to the ongoing discussion on Confucian values and performance.\nConclusions: This study makes unique contributions in four distinct ways. 
First, the study considers the multifaceted nature of Confucianism, and extends the current dimensions typically used to measure Confucian values by including both pedagogical and transformative aspects. Second, this study demonstrates the utility of BWS for capturing trade-offs and providing a clearer delineation of values and performance prioritization across respondents. Third, from the data the study identifies five clusters across three societies, highlighting distinct patterns not discerned from the societal/national level of analysis alone. And fourth, this study contributes to the link between Confucianism and performance - at the micro-level - by testing the association between three aspects of Confucianism and a series of work-related performances.\n", "role": "user" }, { "content": "Purpose: This paper reconsiders the approaches to measuring Confucian values, and tests their association with workforce performance. The purpose of this paper is to examine how such values and performances are prioritized across three East Asian societies, but more importantly, identifies how variations across societies might result from the way in which Confucianism has been transformed/appropriated differently across history.\nDesign/methodology/approach: A Best-Worst experimental design is used to measure three aspects of Confucianism (relational, pedagogical, and transformative), and three aspects of workforce performance (mindset, organization, and process) to capture the trade-offs by respondents from three East Asian societies: China (n=274), Taiwan (n=264), and South Korea (n=254). The study employs analysis of variance with post-hoc tests to examine differences between societies. A hierarchical cluster analysis using Ward's method is utilized to identify clusters based on similarities within the data. And last, multiple regression analysis is applied to determine the explanatory power of Confucian values on workforce performance.\nFindings: Findings confirm the prioritization of three aspects of Confucianism (relational, pedagogical, and transformative) to differ between Mainland Chinese, Taiwan Chinese, and Korean respondents - producing five distinct clusters based on similarities across three societies. Overall, between 7 and 27 percent of the variance in workforce performance could be explained by the Confucian values included in this study.\nOriginality/value: This study highlights the \"different shades of Confucianism\" across East Asian societies, which we coin as Confucian Origin, Preservation, and Pragmatism, and demonstrates the need to take a multifaceted perspective in the measurement of Confucian culture. The study provides empirical support for the link between Confucianism and performance at the micro-level, as originally proposed by Baumann and Winzar (2017), and identifies specific antecedents of behavior for research moving forward.\n", "role": "assistant" } ]
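The abstract above leans on Best-Worst scaling (BWS) to capture trade-offs between Confucian values. As a minimal sketch of how BWS responses are typically scored - each respondent repeatedly picks the "best" and "worst" item from small subsets, and an item's score is the best count minus the worst count, standardized by how often the item appeared - the following Python fragment uses invented choice data; it is a generic illustration, not the paper's analysis pipeline.

```python
# Generic Best-Worst scaling (BWS) count scoring with invented choice data.
from collections import Counter

# Each choice task: (items shown, item picked as best, item picked as worst).
tasks = [
    (["relational", "pedagogical", "transformative"], "pedagogical", "transformative"),
    (["relational", "pedagogical", "transformative"], "relational", "transformative"),
    (["relational", "pedagogical", "transformative"], "pedagogical", "relational"),
]

best, worst, shown = Counter(), Counter(), Counter()
for items, b, w in tasks:
    shown.update(items)   # how often each item was available to choose
    best[b] += 1
    worst[w] += 1

# Standardized best-worst score in [-1, 1]: +1 = always best, -1 = always worst.
scores = {i: (best[i] - worst[i]) / shown[i] for i in shown}
for item, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{item:15s} {score:+.2f}")
```

Per-respondent scores of this kind are what downstream steps such as the ANOVA, Ward's clustering and regression mentioned in the abstract would then operate on.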
Enhancing social tagging with automated keywords from the Dewey Decimal Classification
[ "Information retrieval", "User studies", "Indexing", "Classification schemes", "Semantics", "Controlled languages" ]
[ { "content": "Summarize the following paper into structured abstract.\n\n1. Introduction: Knowledge organization systems (in further text: KOS) such as classification schemes and subject heading systems play a crucial role in information discovery. They help reduce the ambiguity of end users' natural language during search and improve precision and recall of the retrieved resource set. However, although they have been used as tools for information organization and retrieval for over a century now, there are several issues of applying them today; the major ones are the associated cost, slowness to accommodate new concepts and traditionally a resource-oriented rather than user-oriented approach to indexing.\n2. Background: 2.1 Rationale\n3. Methodology: 3.1 Data collection, KOS and demonstrator\n4. Results: 4.1 Number of tags\n5. Conclusions: In order to compare simple tagging as common in existing applications against tagging enhanced with suggestions from a KOS (DDC with LCSH mappings), an exploratory user study was conducted. It investigated indexing aspects and influence on retrieval when using only social tagging vs when using social tagging in combination with the KOS. The study involved a demonstrator with two interfaces, the Simple Tagger and the Enhanced Tagger, both having features common in tagging applications and the latter with the additional feature of automated suggestions from the KOS. There were 28 participants who each tagged 60 resources across four different tasks, two of which were on a controlled topic with a group task scenario, and two were on a freely chosen topic; two were completed in the Simple Tagger and two in the Enhanced Tagger. The data collection included over 11,000 web-based textual resources in the area of politics. The analysis was based on the log data of actions the participants took including the log analysis of user sessions, and on a set of three types of questionnaires.\n", "role": "user" }, { "content": "Purpose: The purpose of this paper is to explore the potential of applying the Dewey Decimal Classification (DDC) as an established knowledge organization system (KOS) for enhancing social tagging, with the ultimate purpose of improving subject indexing and information retrieval.\nDesign/methodology/approach: Over 11,000 Intute metadata records in politics were used. Totally, 28 politics students were each given four tasks, in which a total of 60 resources were tagged in two different configurations, one with uncontrolled social tags only and another with uncontrolled social tags as well as suggestions from a controlled vocabulary. The controlled vocabulary was DDC comprising also mappings from the Library of Congress Subject Headings.\nFindings: The results demonstrate the importance of controlled vocabulary suggestions for indexing and retrieval: to help produce ideas of which tags to use, to make it easier to find focus for the tagging, to ensure consistency and to increase the number of access points in retrieval. The value and usefulness of the suggestions proved to be dependent on the quality of the suggestions, both as to conceptual relevance to the user and as to appropriateness of the terminology.\nOriginality/value: No research has investigated the enhancement of social tagging with suggestions from the DDC, an established KOS, in a user trial, comparing social tagging only and social tagging enhanced with the suggestions. This paper is a final reflection on all aspects of the study.\n", "role": "assistant" } ]
Unemployment benefits and immigration: evidence from the EU
[ "Immigration", "Unemployment benefit spending", "Welfare magnets", "European Union", "Benefits", "Europe" ]
[ { "content": "Summarize the following paper into structured abstract.\n\n1. Introduction: In recent years the topic of \"welfare migration\" has raised controversial discussions and generated a substantial body of literature. There is concern that excessive participation in welfare or social security systems might be a more common phenomenon for immigrants than for natives (Cohen et al., 2009; Nannestad, 2007) or constitute a fiscal burden for host countries (De Giorgi and Pellizzari, 2009).The scope of this paper is to explore whether and how changes in countries' welfare generosity affect immigration. Instead of using an aggregate measure of welfare, such as total social public spending (which would include social assistance), this work focuses on unemployment benefits. These benefits result from a public insurance programme in which participation is conditioned on compulsory contributions during periods of insured work. The contributory nature of the programme makes immigrants' benefit recipiency directly linked to their employment experience. As described by Heitmueller (2005), expected income may be an important factor driving people's decision to migrate. Together with earnings during phases of employment, this also includes unemployment benefits that might be accessed during spells of unemployment. Hence, the presence of unemployment benefits may increase immigrants' expected income as well as help reduce its volatility. As a result, countries with particularly generous unemployment benefits could attract a greater number of (risk averse) immigrants.This hypothesis is tested by estimating the relationship between immigration inflows and unemployment benefit spending (UBS) as a fraction of the gross domestic product for a sample of European countries. Flows from EU and non-EU origins are analysed separately because immigrants from these two broad origins are likely to respond in different ways to UBS. This could be due to, for example, their diverse socio-economic characteristics or the different treatment in terms of immigration legislation (Anastassova and Paligorova, 2005), or even different eligibility criteria for unemployment programme participation. In addition, while immigrants from EU origins are free to migrate within the EU, migrants from non-EU origins do not have the same freedom.Building upon recent studies which have found no (Pedersen et al., 2008) or moderate (De Giorgi and Pellizzari, 2009) evidence of the welfare magnet hypothesis, the paper's main contribution is that it systematically studies the endogenous nature of UBS in the context of the welfare magnet hypothesis. Specifically, two potential channels of reverse causality between immigration and UBS are explored. The first is a case of simultaneity, whereby immigrants impact UBS through benefit take-up or by affecting the GDP of a country. This hypothesis is investigated by estimating the probability of unemployment benefits recipiency, conditional on unemployment, for both immigrants and natives. By doing so, it is possible to distinguish whether reverse causality arises due to the composition of immigrant population or due to immigrants' higher propensity to be in welfare. The second source of reverse causality relates to how policy reacts to immigration by cutting (or expanding) UBS. 
This conjecture is investigated by analysing whether changes in eligibility criteria and durations of unemployment benefits are associated with the evolution of immigration patterns. In order to address the potential endogeneity implied by reverse causality, UBS is instrumented with the number of political parties within each winning parliamentary coalition. The rationale is that social expenditure is likely to be higher (lower) in countries where coalitions comprise more (fewer) political parties (Bawn and Rosenbluth, 2006). While the ordinary least squares (OLS) estimates indicate the existence of a moderate within-country welfare magnet effect for non-EU immigrants, the implementation of instrumental variable (IV) and generalised method of moments (GMM) approaches reveals that the impact becomes smaller and statistically insignificant. This result is taken as evidence that reverse causality produces an upward bias in the correlation between immigration and UBS. Therefore, failing to account for such a mechanism implies an overstating of the effect that an exogenous change in UBS would produce on immigration. The analysis for EU immigrants indicates that they do not react to the UBS in host countries. This result might also reflect the different nature of within-EU migration. The paper is organised as follows. Section 2 reviews studies about the welfare magnet hypothesis. Section 3 provides a description of the data and related summary statistics. The empirical strategy is outlined in Section 4, followed by OLS and IV results. Concluding remarks are to be found in Section 5.\n2. Literature review: The focus of the immigration literature on the relationship between welfare and immigration is rather recent. In the context of immigration to the USA, Borjas (1999) proposes that since immigrants in the country have already incurred large costs, they tend to cluster in states offering the highest welfare benefits. Moreover, the generosity of the welfare state will also affect the skill composition of immigration. In their simulations, Brucker et al. (2002) find that welfare-generous countries attract relatively more low-skilled workers, whilst high-skilled workers prefer to settle in countries where social spending is lower, due to the lower tax burden needed to finance it. Hence, welfare generosity may induce a negative sorting of immigrants. In the context of EU enlargement, Boeri and Brucker (2005) argue that when the risk of being unemployed is greater for immigrants than for natives, the incentive to migrate increases with the replacement rate, mainly for low-skilled individuals. Several empirical studies have explored the welfare magnet hypothesis. Using the European Community Household Panel for the period 1994-2001, De Giorgi and Pellizzari (2009) estimate the correlation between immigration and the net replacement rate (NRR), used as a proxy for welfare generosity. The NRR is defined as the ratio between unemployment benefits and average wages. They find that welfare generosity acts as a magnet for immigrants, but its impact is relatively weak. On the other hand, labour market conditions in the destination countries (such as unemployment rates and wages) and networks play a vital role in the decision to move. A similar analysis was carried out by Pedersen et al. (2008). Their study, based on detailed immigration flows to OECD countries for the period 1990-2000, mainly focuses on exploring the impact of social networks on immigration.
However, their regression analysis also controls for total social expenditure, used as a proxy for welfare generosity. Results from their preferred specification do not support the existence of a positive correlation between immigration and social expenditure. To summarise, while theory suggests that immigrants - in particular the low-skilled - are more likely to move to generous countries, there is no strong empirical evidence that this is actually the case. This paper contributes to the recent empirical evidence in two ways: first, it focuses on unemployment benefits as a proxy for welfare generosity. Changes in public insurance programmes affect the total income that working immigrants could obtain in the potential country of destination and hence influence their decision to move. Second, issues of endogeneity of welfare generosity are directly addressed by exploring reverse causality between UBS and immigration.\n3. Data: The sample covers 19 European countries (the EU-15, excluding Greece, for which immigration inflows were not available, plus the Czech Republic, Hungary, the Slovak Republic, Norway and Switzerland) from 1993 to 2008[1]. Data were accessed from several sources. Gross immigration inflows come from the OECD Systeme d'observation permanente des migrations (SOPEMI) database, which provides consistent and harmonised data over time. These are used to calculate immigration inflows expressed as a percentage of the total population in a country. Missing information on flows from some countries was complemented with the data used in Pedersen et al. (2008)[2]. From SOPEMI, information on the stock of foreign-born population was obtained as well and was used to construct the social network variable (see Pedersen et al., 2008). Data on UBS were collected from the OECD Social Expenditure Database (SOCX), which provides detailed information on social welfare spending in OECD countries[3]. Complementary information on the characteristics of UBS (such as eligibility criteria and duration) and on expenditure on family, health and pension programmes was collected as well (OECD, 2002, 2007). Finally, statistics on the unemployment rate and per-capita GDP were obtained from the World Development Indicators (WDI) online database[4]. Data on the number of parties in government coalitions were collected from the European Election Database[5]. Summary statistics are reported in Table AI in the Appendix.\n4. Empirical framework and results: The hypothesis that immigration flows are correlated with UBS is tested with the following econometric model: (Equation 1) where m_it indicates immigration inflows expressed as a percentage of the total population in country i at time t, and x_i,t-1 represents UBS. The equation is estimated for both EU and non-EU immigration inflows. The matrix z_i,t-1 includes, among other covariates, the social network variable. This corresponds to the stock of immigrants from the same origin as the flows (i.e. either EU or non-EU) as a percentage of the total population. Per-capita GDP and the unemployment rate of the destination country are also included in order to control for macroeconomic fundamentals correlated with immigration inflows. To adjust for the fact that immigrants do not immediately respond to incentives in the host countries, all explanatory variables enter in their lagged values.
Lags might also address problems of endogeneity, but only partially, especially if persistent unobservable shocks contained in the error term are correlated with both the response variable and the covariates on the right-hand side of Equation (1). Issues of endogeneity are explored in the next subsection, where IV and GMM approaches are discussed. The model is estimated using country fixed effects; hence, the parameter of interest (b) represents the correlation between immigration inflows and UBS estimated through within-country changes. Year dummies are included as well to control for time-varying shocks common to all countries. In addition, an indicator for the years after the 2004 EU enlargement is introduced to capture changes in immigration patterns common to all receiving countries[6]. Due to the inhomogeneous size of countries, observations are weighted by population size[7]. We also estimated the corresponding OLS models without country fixed effects. This invariably resulted in an insignificant correlation between migration inflows and UBS. Cross-country variation could, however, mask statistically significant within-country effects. Such within-country effects may also be more relevant for the decisions of a country about reforming its welfare system. To strengthen the analysis we therefore proceed with the fixed-effect models, removing any time-invariant cross-country variation. 4.1 OLS results\n5. Conclusions: This paper has explored the role of UBS in immigration using a sample of EU countries during the period 1993-2008. While the OLS analysis revealed the presence of a moderate within-country correlation between social welfare spending and immigration from non-EU origins, the IV and GMM estimates indicate that the causal effect is smaller and statistically insignificant, thereby rejecting the welfare magnet hypothesis. All estimates for EU immigrants are essentially zero, which suggests that immigration within the EU does not respond to unemployment benefit incentives. Although the results are robust across specifications, it is important to point out potential limitations of the analysis. On the one hand, the absence of information detailing the country of origin, especially for immigration from outside the EU, might be confounding the existence of welfare magnet effects. Although the effect of UBS on immigration is zero on average, it is not possible to exclude that for immigrants from certain origins, unemployment benefits constitute a strong incentive to immigrate. Future availability of detailed data will allow the exploration of this hypothesis. As for EU immigrants, the finding that the estimated effect is essentially zero could be determined either by the fact that they are more skilled, and hence less likely to be attracted by welfare states (Brucker, 2002), or simply by their freedom of movement within the EU, which leads them to rely on (or refer to) their home country unemployment benefit system. Finally, some descriptive evidence that the positive correlation between welfare and immigration arises due to two channels was provided. First, inappropriate immigration policies in Europe may lead to an adverse composition of immigrants, which in turn results in a higher unemployment benefit take-up rate on the part of immigrants. Second, welfare policies may become more generous in the wake of immigration.
These are fruitful areas for future research.\n", "role": "user" }, { "content": "Purpose: - Economic theory predicts that unemployment benefits may increase expected income and reduce its volatility, thereby attracting immigrants to countries which implement such programs. This article aims to explore whether and how changes in countries' unemployment benefit spending (UBS) affect immigration.\nDesign/methodology/approach: - Data are collected for 19 European countries over the period 1993-2008. The relationship between immigration flows and UBS is first tested using the OLS technique. Instrumental variable (IV) and generalised method of moments (GMM) are then used to address reverse causality.\nFindings: - While the OLS estimates suggest the existence of a moderate within-country welfare magnet effect for the inflows of non-EU immigrants, the IV approach reveals that the impact is substantially smaller and statistically insignificant when GMM techniques are implemented.\nResearch limitations/implications: - Since information on the immigrants' country of origin is not available, it is not possible to exclude that for immigrants coming from certain areas, unemployment benefits constitute a strong incentive to immigrate. This hypothesis awaits further research, once detailed data is available.\nOriginality/value: - This paper complements previous literature on immigration and welfare by exploring the endogenous nature of welfare spending. The empirical results provide insights into the interaction between immigration and welfare policies.\n", "role": "assistant" } ]
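To make the estimation strategy in the record above concrete, here is a minimal Python sketch of a country fixed-effects panel regression of immigration inflows on lagged UBS, followed by an IV step that instruments UBS with the number of coalition parties. The file name and every column name (inflow_noneu, ubs, network, gdp_pc, unemp_rate, n_parties, population) are hypothetical stand-ins; the paper's exact specification, weights and GMM moment conditions are not reproduced.

```python
# Sketch of the fixed-effects OLS and IV steps, assuming a tidy country-year panel.
import pandas as pd
from linearmodels.iv import IV2SLS
from linearmodels.panel import PanelOLS

df = pd.read_csv("eu_panel.csv").set_index(["country", "year"]).sort_index()

# All explanatory variables (and the instrument) enter with a one-period lag.
for col in ["ubs", "network", "gdp_pc", "unemp_rate", "n_parties"]:
    df[col + "_lag"] = df.groupby(level="country")[col].shift(1)
df = df.dropna()

# OLS with country fixed effects and year dummies, weighted by population size.
fe = PanelOLS.from_formula(
    "inflow_noneu ~ ubs_lag + network_lag + gdp_pc_lag + unemp_rate_lag"
    " + EntityEffects + TimeEffects",
    data=df,
    weights=df["population"],
).fit(cov_type="clustered", cluster_entity=True)
print("FE-OLS UBS coefficient:", fe.params["ubs_lag"])

# IV step: instrument lagged UBS with the lagged number of coalition parties.
# Demeaning each series by country approximates the fixed-effects transform
# (year dummies are omitted here for brevity).
dm = df.groupby(level="country").transform(lambda s: s - s.mean())
iv = IV2SLS(
    dependent=dm["inflow_noneu"],
    exog=dm[["network_lag", "gdp_pc_lag", "unemp_rate_lag"]],
    endog=dm["ubs_lag"],
    instruments=dm["n_parties_lag"],
).fit(cov_type="robust")
print("IV UBS coefficient:", iv.params["ubs_lag"])
```

Comparing the two printed coefficients mirrors the paper's headline result: a positive, upward-biased OLS estimate that shrinks toward zero once UBS is instrumented.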
Communication technologies promoting educational communities with scholarship of engagement
[ "Facilitation", "Conflict resolution", "Interactive management", "Dramatic problem solving", "Work groups" ]
[ { "content": "Summarize the following paper into structured abstract.\n\nIntroduction: Dramatic Problem Solving Facilitator Model (DPSFM) and Interactive Management (IM)\nConclusion: It is clear that a significant goal for the Scholarship of Engagement is praxis. The main purpose of this paper was to offer DPSFM and IM as specific alternative dispute resolution examples for the Scholarship of Engagement that can benefit from communication technologies in collecting, recording, and analyzing data. It has the potential to translate research into practice for the betterment of society, communities, and learning institutions around the world.\n", "role": "user" }, { "content": "Purpose: - This study aims to present Dramatic Problem Solving Facilitation Model (DPSFM) and Interactive Management (IM) as innovative alternative dispute resolution approaches that incorporate communication technologies in recording and analyzing data. DPSFM utilizes performance-based actions with facilitation methods to help participants design action plans. Interactive Management (IM) utilizes a holistic, culturally sensitive and systemic approach to resolve complex problems. Communication technological assistance is accompanied by IM to enhance parsimony and support saliency of the ideas generated. Both approaches support action plans. This paper argues that both DPSFM and IM are theoretical frameworks that are valuable for conflict resolution, educators, and social science specialists who are interested in conducting research under the paradigm of knowing referred to as \"Scholarship of Engagement\". This paradigm engages participants in research while promoting action plans and improving status quo conditions. The current paper overviews the benefits of both models and makes a strong case for application of their communication technologies across a variety of contexts. DPSFM and IM are directly involved with promoting change, and supporting actions to improve human conditions. The framework of \"Scholarship of Engagement\" is introduced as an integrative model for \"praxis\". The purpose of this paper was to provide alternative options for working in facilitation groups.\nDesign/methodology/approach: - The methodological approach involved the models of DPSFM and IM.\nFindings: - The findings of DPSFM and IM applications have been positive and they were revealed in several domestic and international settings.\nOriginality/value: - The value of this work is that it presents an alternative perspective to traditional models in facilitation. The work has transferability in a number of community settings.\n", "role": "assistant" } ]
A comparative study on the national counterpart aid model for post-disaster recovery and reconstruction: 2008 Wenchuan earthquake as a case
[ "Counterpart aid", "Comparative study", "Post-disaster recovery and reconstruction", "Wenchuan earthquake", "Earthquakes", "Disasters" ]
[ { "content": "Summarize the following paper into structured abstract.\n\nIntroduction: Disasters are not new phenomena, but an integral part of human life (McEntire, 1999). However, disasters can occur at anytime without warning. They can cause widespread human, material, economic or environmental losses which exceed the ability of the affected society to cope by itself (United Nations International Strategy for Disaster Reduction (UN ISDR), 2006). Owing to a lack of disaster management capability and resources, when struck by a catastrophe, the local government is usually overwhelmed and unable to tackle the urgent needs of the affected populations (Kusumasari et al., 2010); therefore external assistance is often urgently required. Using case studies and comparative research on disaster aid models, Comerio (1998) found that the Mexican Government assumed primary responsibility for rebuilding after the 1985 Mexico City earthquake, the Japanese Government left the rebuilding largely to the private market after the 1995 Kobe earthquake, and the American Government attempted a merge of the public and private boundaries. Commonly, there are three types of organized aid models for post-disaster recovery and reconstruction (PDRR):\nResearch methods: This study was performed in three steps: first, a case study on the NCA model during the PWERR; second, a documentary research on the CGA, NNA and IHA models; and third, a comparative analysis of the four models.\nNCA model in the PWERR: Immediately after the implementation of the NCA program, a preliminary investigation of the operations was conducted in the disaster areas. Four months later a research paper with a systems engineering perspective was then published to instruct the NCA (Xu and Hao, 2008). Based on extensive field investigations, the NCA model was studied from a broad perspective.\nThree typical aid models in the PDRR: CGA model\nModel comparison: The PDRR is indeed a systematic project. The success of PDRR is much more than a matter of delivering and constructing houses and towns (Oliver-Smith, 1991). Consequently, many factors have to be considered. For instance, Moe (2010) studied the Katrina PDRR in terms of policy, process, priorities, problems and politics. Jahangiri et al. (2011) compared the participatory actions in the PDRR from the aspects of process, policy, organization and coordination. Through the literature review and systematic thinking, four indicators were decided on for the model comparison in this study. They are:\nFurther thought: Disasters can be examined as \"political events\" (Robinson et al., 1986). Disasters occur in a political space, and politics is an integral element of the disaster and its management. Therefore the external aid for the PDRR is also a political activity. In order to gain a better understanding of the NCA model, its political background has to be discussed. The recommendations as to how to improve the NCA model and what should be done to refine the external assistance management then will be developed by analyzing the challenges during the practical operations of the NCA model.\nConclusion: The NCA, as a systematic approach for PDRR, proved feasible and effective in the 2008 Wenchuan earthquake and again in the 2010 Yushu earthquake in China. Using the NCA approach, the Chinese Government quickly incorporated the disparate elements into national planning for the PDRR, and powerfully promoted its implementation. 
However, the NCA remains government-driven, reflecting China\n", "role": "user" }, { "content": "Purpose: External assistance is often urgently required when an area is struck by a catastrophe. Central government-oriented aid (CGA), national non-governmental organizations aid (NNA), and international humanitarian aid (IHA) are three typical aid models, and national counterpart aid (NCA) is an innovative model for post-Wenchuan earthquake recovery and reconstruction (PWERR). The purpose of this paper is to make a comparative study of the aid models in order to develop recommendations on external assistance for better post-disaster recovery and reconstruction (PDRR).\nDesign/methodology/approach: This study was performed in three steps: a case study on the NCA model during the PWERR; documentary research on the CGA, NNA and IHA models; and a comparative analysis of the four models.\nFindings: The results of the comparative study show that the NCA model is more efficient and effective than the other three typical aid models. However, it must be based on the premise of a centralized government.\nPractical implications: The NCA model contributes to disaster management in a developing country context. It develops a viable model for developing countries in coping with catastrophes, can enhance their ability for domestic self-assistance, and provides a reference for the construction of national integrated disaster management systems.\nOriginality/value: This paper empirically researches the innovative NCA model, and compares it with commonly used aid models for the PDRR. Although it is a Chinese experience, the systematic inter-governmental collaboration of the NCA model offers a reference for other countries in disaster management.\n", "role": "assistant" } ]
Healthcare system performance improvement: A comparison of key policies in seven high-income countries
[ "Quality", "Primary care", "Information technology" ]
[ { "content": "Summarize the following paper into structured abstract.\n\nIntroduction: Improving healthcare system performance is at the forefront of the policy agenda of governments, healthcare funders and providers across the high-income world. The reasons for this have been widely documented, including rising healthcare costs, questions about service coordination and patient experience, the significant incidence of patient harm occurring from errors in treatment processes or inadequate standards of care, the promise of better systems with application of information technology and, more recently, the deepest economic crisis since the 1930s. Coupled with this have been demographic changes and population ageing, and an increasing burden of chronic disease.\nSeven countries: background information: Table I presents data on the seven countries, all which are market-based economies. The USA is the outlier with higher total health expenditure and a lower proportion of public versus private expenditure. The seven have differing methods of financing healthcare. Australia, Canada, New Zealand and England have predominantly tax-financed healthcare systems, although Australia's Medicare scheme, albeit tax-financed, functions as a form of social insurance. Insurance underpins the financing of care in several of the systems, while the methods for each differ. Germany features social insurance, offered by multiple organizations with a common fee payment schedule. The Netherlands has mandatory insurance offered by competing private organizations/agencies, so what was previously considered to be the public proportion of total health expenditure is no longer reported to the OECD. The USA has a mix of public and employer-based private insurance that covers different groups, with a substantial portion (16.7 percent) of the population uninsured. With implications for policy activities around the three issues discussed below, England, The Netherlands and New Zealand have unitary, centralized political systems. The remaining four countries have federal systems with varying levels of central control and differential capacity for intervention in state-level activities.\nPrimary care, quality improvement and information technology policies in comparison: Table III summarizes the policies in place and approaches in the seven countries to each of the three issues that are the central focus of this article. The sections below highlight similarities and differences between the seven countries.\nDiscussion: Clearly, there are considerable differences between the seven healthcare systems covered in this article in terms of how each is funded and structured. There are also differences in political systems meaning that, for some of the countries, especially those with federal structures, developing specific policy initiatives will be more difficult than for others. Nonetheless, each of the seven countries faces similar challenges when it comes to healthcare system performance improvement. Data show that healthcare expenditure growth continues to outpace the rate of general expenditure growth (Organisation for Economic Co-operation and Development, 2012) while, as cited in this article, shortcomings in the quality of care, in information technology utilization, and in access to care are evident to varying degrees in each.\n", "role": "user" }, { "content": "Purpose: Evidence suggests that healthcare system performance may be improved with policy emphasis on primary care, quality improvement, and information technology. 
The authors therefore sought to investigate the extent to which policy makers in seven countries are emphasizing these areas.\nDesign/methodology/approach: Policies in these three areas in seven high-income countries were compared. A comparative descriptive approach was taken in which each of the country-specialist authors supplied information on key policies and developments pertaining to primary care, quality improvement and information technology, supplemented with routine data.\nFindings: Each of the seven countries faces similar challenges with healthcare system performance, yet differs in emphasis on the three key policy areas; efforts in each are, at best, patchy. The authors conclude that there is substantial scope for policy makers to further emphasize primary care, quality improvement and information technology if aiming for high-performing healthcare systems.\nOriginality/value: This is the first study to investigate policy-makers' commitment to key areas known to improve health system performance. The comparative method illustrates the different emphases that countries have placed on primary care, quality improvement and information technology development.\n", "role": "assistant" } ]
Malnutrition in primary school-age children: A case of urban and slum areas of Bahawalpur, Pakistan
[ "Child growth", "Child health", "Malnutrition", "Anthropometry", "Epidemiology", "Slums", "Pakistan", "CIAF", "Urban areas" ]
[ { "content": "Summarize the following paper into structured abstract.\n\n1 Introduction: Economic literature suggested that health and nutrition may pay off in terms of economic growth as well as equity concerns by improving the educational performances of poor people in the developing world. The productivity and equity in harmony have an important place. Policy makers, therefore should consider how various factors effect child health and nutrition and identify changes that would improve them. Child nutrition is an important ingredient of national development, as it has powerful positive effect on human capital. A number of studies supported the evidence that health and nutrition influence children's success in school (Alderman et al., 2001; Maluccio et al., 2006). Malnutrition has significant negative effect on tests of cognitive function in both pre-school and school-age children, and on attendance and achievements (Leslie and Jamison, 1990). Moock and Leslie (1986) narrated that efforts to improve child's nutritional status may have educational benefits as well as survival and health benefits.Malnutrition in young children is conventionally measured through anthropometric measures[1], i.e. body mass index (BMI) for age, mid-upper-arm circumference (MUAC) for age, head circumference for age, subscapular skin fold for age and triceps skin fold for age[2]. The most commonly used BMI indices are stunting (low height-for-age), wasting (low weight-for-height) and underweight (low weight-for-age)[3]. Stunting is an indicator of chronic malnutrition, the result of prolonged food deprivation and/or disease or illness; wasting is an indicator of acute malnutrition, the result of more recent food deprivation or illness; underweight is used as a composite indicator to reflect both acute and chronic malnutrition, although it cannot be distinguished between them[4]. These indices are compared against an international reference population developed from anthropometric data collected in the USA by the National Center for Health Statistics[5]. Children whose measurements fall below -2 z-scores of the reference population median are considered malnourished, i.e. to have stunting, wasting or to be underweight. Those children with measurements below -3 z-scores are considered to be severely malnourished.These indices reflect distinct biological process, and their use is necessary for determining appropriate interventions. However, because they overlap, none is able to provide a comprehensive estimate of the number of malnourished children. Some children who are stunted may also have wasting and/or be underweight. Some children who are underweight may also have wasting and/or be stunted and some children who have wasting may also be stunted and/or underweight. Estimates of the prevalence of malnutrition depend on the indictors used. The use of three indices to show the prevalence of malnutrition raises a number of questions; just how many malnourished children are there in a particular population? Do the three indices provide a true picture of the scale of the malnutrition? Is underweight (currently used as the measure of malnutrition in MDG) the most appropriate indicator? 
Can an alternative composite measure that includes all undernourished children be developed? The objective of the current study is to estimate the determinants of malnutrition of children (five to ten years) by constructing the composite index of anthropometric failure (CIAF), which, to our knowledge, has never before been constructed for Pakistan. The novelty of the study is that we have added socioeconomic variables such as the activity of the child (whether the child goes to school or is involved in any other activity) and the locality of the household.\n2 Methodology and estimation model: We have defined malnourished children as children who experience stunting, wasting, underweight, or any combination of these; this is what the CIAF represents. The malnourished children have been segregated for each indicator, i.e. stunted, representing chronic or longer-term malnutrition (low height-for-age: height/age); wasted, showing acute or more recent malnutrition (low weight-for-height: weight/height); and underweight, expressing a composite of stunting and wasting (low weight-for-age: weight/age), from z-scores (Waterlow, 1977; WHO, 1995). Children whose z-scores for each indicator fell below -2 standard deviations are classified as stunted, wasted or underweight[6] (Nandy and Miranda, 2008). Age and sex were taken into account when calculating the z-scores from the reference population median. Then the CIAF is constructed for children (WHO, 1995; Nandy et al., 2005). Primary data collected exclusively for the study using a cluster sampling technique have been used. The data come from a survey, conducted in March-April and September-October 2008, of 600 urban and slum households of Bahawalpur city, each with at least one child in the age group of five to ten years. The age group represents primary school age, though Cigno et al. (2001) have defined school age as six to 12 years due to the availability of data for that age group. The definition of slums has been adopted as given by the Government of Pakistan. The survey contains the particulars of children, parents and household. Sixteen clusters were selected: eight from urban areas and eight from slums. The analysis is based on a health production function, and the equation for the malnutritional status of children is specified as Equation (1), MSC = f(x_j, x_ji, x_e) + u, where MSC is the malnutritional status of the child and x_j are the child and parental characteristics, like age, birth-order, gender, major activity of the child, and father's and mother's age and education; x_ji are the household characteristics, such as assets owned by the household, per capita income, number of children in the household, number of persons living in a room and structure of the household; and x_e are community-level variables, like provision of electricity, safe drinking water, flush toilet, underground drainage and locality, representing whether the household is located in an urban or slum area. u is the random error term, and is assumed to have a normal distribution. For estimation, we used the logit model, which is prevalent in malnutrition analysis of children (Jeyaseelan and Lakshman, 1997; Rahman and Chowdhury, 2007).
The operational definitions of the dependent and explanatory variables are shown in Table I. It is important to note that there is a variety of other socioeconomic variables related to the child, parents, household and community which may affect the nutritional status of the child[7], for instance, birth-weight, health condition at birth, gestational length, recreational facilities for the children, immunization of children (in child characteristics), mother's BMI (Rahman and Chowdhury, 2007), mother's height, mother's reproductive behavior, birth-spacing practices, exposure to media (Kuate-Defo, 2001), mother's nutritional behavior, mother's age at marriage, mother's age at the time of birth, number of children born, mother's employment status, mother's income, mother's kind of employment (formal or informal), mother's working hours, parental occupation and parents' income (in parent characteristics), presence of older children in the household, family size and composition, i.e. presence of infants, schooling children, prime-age siblings and presence of grandparents (Mansuri, 2006), household income (Shehzad, 2006) and gender of head of household (in household characteristics), and immunization coverage, disease control programs, public health programs, number of hospitals/dispensaries/basic health units existing in the community, number of doctors and beds in the hospital available to the community (Shehzad, 2006) and distance of the health/medical facility center from the household (in community characteristics).\n3 Discussion of findings: We have calculated the ratio of stunting, wasting and underweight children and the CIAF children in the sample. The estimates are shown in Table II. It is evident that for all categories of anthropometric failure, the slum children have a higher ratio, and among the categories, underweight has the highest ratio. The logit results are given in Table III. The results are consistent with the theoretical implications of malnutrition in children. The following features of the results are worth noting. 3.1 Child's age and birth-order\n4 Conclusion and policy recommendations: The purpose of our study was to examine the socioeconomic factors causing malnutrition in children. We constructed the aggregate indicator of stunting, wasting and underweight, named the CIAF, to represent the malnutrition of children. Our results provide important policy insights for reducing malnutrition in children. In the analysis, the important predictors that emerged are the gender and activity of the child, parents' education, household per capita income, ownership of assets, number of children in the household, congestion in the household, provision of electricity, potable water and drainage facilities to the household, and slum locality of the household. We have contradictory results for some variables, for instance, the gender aspect of malnutrition in children and mother's age. These contradictions are based on the fact that the impact of these variables differs across geographic regions due to varying cultural norms (Ali, 2000). These results have implications for policy formation, where awareness about gender equity of children and keeping children away from child labor and other related activities can play an important role in the elimination of malnutrition in children. The improvement of parents' education, specifically mothers', needs the attention of policy makers to enhance children's anthropometric outcomes.
Information and communication efforts, along with nutrition education, health and family planning programs, should educate mothers not only about the benefits of a smaller number of children, but also about the special nutritional vulnerabilities of and attention needed by children. Raising household per capita income through income-generating programs and direct stipends to the poor community may be useful for reducing malnutrition in children. In the short run, the provision of fortified food in schools of poor communities and slums may be recommended. Fertility reduction programs may also be beneficial for the purpose, along with other desirable socioeconomic objectives (Khan and Khan, 2004). The removal of congestion in the household and improvement in living conditions through the provision of electricity, potable water and underground drainage should be a part of development policy. In this regard, the local and provincial governments can contribute substantially in the form of provision of public health facilities. Traditionally, improvements in water supply and sanitation have been promoted as essential public health measures to improve the population's health status. If universal piped and regulated water supplies are achieved, much of the incidence of CIAF could be prevented. This requires close monitoring of drinking water quality at the local level through practical and cost-effective methods. These are critical interventions for the health of children. Slums need a support system developed within the community. Community health workers who have access to every household can be trained to identify high-risk households regarding children's nutrition.\n", "role": "user" }, { "content": "Purpose: - The purpose of this paper is to highlight the determinants of malnutrition of primary school-age (five to ten years) children in urban and slum areas. The ultimate objective is to frame policy proposals for children's nutritional welfare.\nDesign/methodology/approach: - In this empirical study, a logit model is applied to 882 observations of primary data. The composite index of anthropometric failure (CIAF) is constructed for use as an indicator of malnutrition.\nFindings: - The probability of anthropometric failure increases with age, birth-order, female sex and activity of the child (child labor or home-care activity) other than schooling. The parents' education, specifically mothers' education, can play an important role in the child's nutritional status. Malnutrition is positively related to congestion in the household (number of household members per room), while provision of electricity, safe drinking water and underground drainage turns out to negatively affect children's malnutrition. The children living in slums are more likely to experience anthropometric failure.\nResearch limitations/implications: - From the policy perspective, awareness about gender equity of children, adult education, growth of household income specifically in slum areas, and improvement in living conditions (through public health works programs) may contribute to enhancing children's nutritional status.\nPractical implications: - The slums need a targeted policy for children's welfare regarding their nutrition, in the form of provision of public utilities and income support.\nOriginality/value: - From the methodological point of view, the CIAF has been estimated as a measure of malnutrition. The findings of the study may support academicians, policy makers and social activists in human development programs.\n", "role": "assistant" } ]
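For readers who want the CIAF construction and logit step in the record above in concrete form, here is a minimal Python sketch. The file name, the z-score columns (haz, whz, waz) and the regressor list are hypothetical stand-ins for the survey variables in Table I; this is an illustration of the method, not the paper's actual code or data.

```python
# Sketch: flag anthropometric failure from z-scores, build CIAF, fit a logit.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("bahawalpur_children.csv")  # hypothetical survey extract

# A child falling below -2 z-scores on an indicator is classified as failing it.
df["stunted"] = (df["haz"] < -2).astype(int)      # height-for-age
df["wasted"] = (df["whz"] < -2).astype(int)       # weight-for-height
df["underweight"] = (df["waz"] < -2).astype(int)  # weight-for-age

# CIAF: failure on any one indicator or any combination of them.
df["ciaf"] = ((df["stunted"] + df["wasted"] + df["underweight"]) > 0).astype(int)

logit = smf.logit(
    "ciaf ~ age + birth_order + female + child_works + mother_edu"
    " + per_capita_income + persons_per_room + electricity + safe_water + slum",
    data=df,
).fit()
print(logit.summary())
```

Because the CIAF flags a child who fails on any indicator, its prevalence is, by construction, at least as high as that of stunting, wasting or underweight taken alone.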
Artful interventions for workplace bullying: exploring forum theatre
[ "Professional development", "Workplace bullying", "Forum theatre", "Arts-based intervention" ]
[ { "content": "Summarize the following paper into structured abstract.\n\nIntroduction: Amy is a registered nurse, in the second year of her career. At 11:15am, she is preparing to put a drip in the arm of her 80 year old patient, but just as she positions the tip of the needle on the skin, she senses someone standing behind her. It is her ward manager; Amy's hand begins to shake. She puts down her needle on the non-sterile sheet and knows immediately she has wasted a precious resource. The manager, breaths out heavily and hisses, \"What a dreadful waste of tax payers' dollars!\"\nWorkplace bullying as a phenomenon: Phenomenology is a branch of philosophy that recognises the importance of shared meanings in the social construction of human lives (Moran, 2000). Donna Ladkin, in her book titled Rethinking leadership: A new look at old leadership questions, explores the ideas behind phenomenology in thinking about leadership. Ladkin (2010) argues that leadership can only be understood in any meaningful way if the key interacting elements involved in the phenomenon are fully considered; these are the leaders, the followers, the specific contexts and the purpose underpinning the relationship.\nAn overview of \"empowering\" interventions: One of the key challenges for researchers who investigate serious issues such as workplace bullying is to create new ways of knowing and for these to be incorporated into changes in actions within organisational practice, and we therefore applaud the idea that innovations should be based on learning in and through work processes within the operations concerned (Ellstrom, 2010). While largely neglected by scholars to date, low level interventions do exist that aim to empower workers to cope with or defend themselves in the face of workplace bullying. For example, Griffin (2004) used a method of cognitive rehearsal, where newly licensed nurses were encouraged to rote learn specific responses to bullying type behaviours in the workplace. For example, if a nurse perceived a member of staff rolling their eyes (a non-verbal innuendo), the nurse was taught to say \"I sense (see from your facial expression) that there may be something you wanted to say to me. It's okay to speak directly to me\" (Griffin, 2004, p. 260). The follow-up phase showed that all bullying behaviours had stopped after one year. This approach was also adopted by Stagg et al. (2011) who worked with 62 nurses, staging a 2-h bullying training session, in conjunction with pre-test and post-test knowledge comparisons. Like Griffin (2004), Stagg et al. (2011) believed such cognitive rehearsal had a positive impact on both knowledge of the issue and management of the problem.\nArtful approaches to empowerment: As can be seen from the discussion above, the arts appear to be a common feature underlying many of the interventions designed to empower targets of workplace bullying. Encouraging creative ways of thinking and knowing might provide employees with a new set of tools, skills and techniques to overcome complex issues such as bullying. Leavy (2009) argues that arts-based research practices provide the right type of holistic experiences with the potential to open up \"new ways to think about knowledge building: new ways to see\" (p. 254). Catterall (2005) espouses the power of arts-driven conversations in providing participants with speculations and debates, and these in turn allow the brain to generate deeper or different connections. 
Similarly, Taylor and Ladkin (2009), in their exploration of artful processes and products in organisations, argue that such processes:\nMaximising forum theatre: The literature on forum theatre highlights three factors that must be taken into account when maximising this type of intervention: the importance of the joker, the need to follow up the intervention with further teaching material and the difficulties of judging performance efficacy.\nThe Vicious Circle: The Vicious Circle is a forum theatre intervention designed to empower workers by encouraging them to develop creative ideas and alternatives for reacting and responding to workplace bullying. The workshop is run in three phases and begins with group participation created by asking 12 audience members to collect a statement card and form a circle at the front of the room. These audience members are asked to act as workplace bullies, the perpetrators, by reading out the statements written on the cards given to them using the emotion and action they feel is appropriate. In the circle's centre, we place the bullying victim, the target, played by the facilitator, and they read the response to each statement and move to face each member of the vicious circle in turn. The Vicious Circle statements, as presented in Table I, list the perpetrator statements and target responses designed specifically for use in a nursing bullying workshop. In practice, the statements can be tailored for other organisational contexts, but the verbal statements should be kept purposefully light, so that if read in isolation, they do not appear to be bullying behaviour. It is not until the circle progresses that the circle participants can see that it is the cumulative effect of one statement after the other that causes the negative impact. In this way, those participating in the circle and those observing the intervention get exposed to the insidious nature of verbal attacks. The circle act can be repeated several times with different audience members taking the part of the perpetrators so that the impact is felt widely amongst the audience.\nConclusion: Workplace bullying is a widespread problem in New Zealand and internationally, and commonly recommended interventions often have little impact on stamping out this severe problem and its serious consequences. This paper argues that phenomenology offers a unique and valuable approach to understanding workplace bullying and its management and, in turn, proposes that artful ways of engaging with workplace bullying are an effective way of empowering workers to address workplace issues. Art-based interventions can help participants see and feel things that help them negotiate their lives at work, allow them \"to free more of our potential as human beings\" (Taylor and Ladkin, 2009, p. 67) and help employees to critically reflect on their current work practices and to create new ways of thinking and acting.\n", "role": "user" }, { "content": "Purpose: This paper aims to explore the phenomenon of workplace bullying in response to recent calls for the development of different approaches and provide an exploration of artful approaches to intervention.\nDesign/methodology/approach: The paper offers a unique conceptualisation of workplace bullying and applies a phenomenological lens to the issue.
A review of the literature explores the potential value of artful interventions, and drawing on the authors' knowledge and experience with the targets of bullying, they design a forum theatre intervention for use in practice.\nFindings: This paper argues that phenomenology offers a unique and valuable approach to understanding workplace bullying and its management. In turn, the authors propose that artful ways of engaging with workplace bullying could be an effective way of empowering workers in a \"moment\" of workplace bullying, and present a forum theatre workshop specifically designed for application in workplace contexts where bullying is prevalent. The three-phase workshop aims to help employees critically reflect on their current work practices and is intended for use in small group teaching.\nResearch limitations/implications: The long-term impact of this intervention has not been evaluated against more traditional methods of addressing this problem. Future research should evaluate the efficacy of this, and other art-based interventions, in workplace settings.\nOriginality/value: Workplace bullying research is predominantly conducted from a functionalist perspective, and other methods of inquiry, such as phenomenology, are rarely considered. This paper argues for artful interventions and provides an original, tailored workshop designed to empower employees to recognise and respond to bullying in the workplace.\n", "role": "assistant" } ]
Anomaly detection based on hybrid artificial immune principles
[ "IDS training dataset", "KDD CUP'99 dataset", "Artificial immune system", "Human immune systems", "Self-organizing map", "Anomaly detection", "Negative selection" ]
[ { "content": "Summarize the following paper into structured abstract.\n\n1 Introduction: An intrusion detection system (IDS) is a system that carries out the process of identifying malicious behavior that targets a network and its resources. IDS have traditionally been classified as either misuse-based or anomaly-based (Mehdi et al., 2007; Rimiru et al., 2011). Systems that use misuse-based techniques contain a number of attack descriptions, or \"signatures\", that are matched against a stream of audit data looking for evidence of the modeled attacks. The audit data can be gathered from the network (Debar et al., 1999; Onut and Ghorbani, 2007) from the operating system, or from application log files.\n2 Components of IDS: An IDS dynamically monitors the underlying system events, and decides whether these events are attack behaviors or normal use of the system (Onut and Ghorbani, 2007; Zeng and Guo, 2009). It acquires information about its environment to analyze system behavior, discover security breaches, attempted breaches, and open vulnerabilities that could lead to potential breaches (Dasgupta, 1999b; Harmer et al., 2002; Sobh, 2006). IDSs are composed mainly of \"Monitor\" to audit the network data, \"Detection Engines\" to make decisions based on available data and system experience (intelligence), and a \"Resolver\" that combine detection engines results and alert the user or log an entry in DB. Wu and Banzhaf (2010) introduced organization of a generalized IDS as shown in Figure 1 where solid lines indicate data/control flow, while dashed lines indicate responses to intrusive activities.\n3 Artificial immune system: Forrest et al. (1994) introduced pioneer research to deal with artificial immune as a recommended solution for building anomaly detection systems (Forrest et al., 1997; Abdelhaq et al., 2011). AIS are computational systems, inspired by theoretical immunology and observed immune functions, which are applied to complex problem domains (Balthrop et al., 2002; DeCastro and Timmis, 2002; Twycross and Aickelin, 2006). AIS were developed from the field of theoretical immunology in the mid-1980. It was suggested that computer science (CS) might look at the immune system. Those systems undergo a lot of evolution since then and many researchers tried to use this metaphor in solving a wide variety of problems. AIS use many biologically inspired algorithms and theories. Examples of algorithms are negative selection, positive selection, and clonal selection. Examples of theories are self/non-self theory and DT (August, 2006).\n4 Proposed system design: The idea behind the proposed hybrid artificial immune principles (HAIP) anomaly detection is to combine more than one theory and technique of immunology in one place. The cooperation of those techniques helped in composing a multi-layered defense system that could adapt with continuous surrounding changes. Mainly we used the self/non-self theory and DT, beside a vaccination unit also inspired from natural immunology. The two theories support two different viewpoints in immunology world. However, in CS domain we believe that the both viewpoints could both be incorporated into one system building up cooperating components that benefit from each other. The vaccination unit role is the same as the one in nature. It is aimed to adapt the behavior of the system against new or evolved attacks. This combination guarantees that there is no single point of defense and more adaptive response to environmental changes. 
This combination also provides a true hybrid IDS and a human-like system.\n5 System results: The proposed system was implemented and tested on a physical experimental setup network. The \"corrected\" KDD 1999 Cup dataset is used and pre-partitioned into a training set consisting of: 86,000 normal connections; 9,000 DoS; 4,000 Probe; 900 U2R; 600 R2L; and a testing set containing: 30,356 normal connections; 3,313 DoS; 2,096 Probe; 453 U2R; 105 R2L. The test network was composed of several hosts behind a firewall, connected to the internet through an ADSL modem; all hosts in this work used the Windows platform, as shown in Figure 7.\n6 Related work and comparative study: Several previous works have been concerned with the problem of detecting intruders using an immunological approach.\n7 Conclusion and future directions: This work shows that many theories from the natural immune system may be combined to serve as a base for more reliable IDSs. The dependence on two or more sources of data, and the analysis of those sources, greatly helped the decision-making unit to take more appropriate actions. This work discusses the DT and self/non-self theory and shows how their ideas may be applied in an IDS system. The proposed system could effectively catch attacks with clear signatures and detect the source of attacks, so that they are recognized in blacklists if they make further attacks. The adaptability feature of the system, inherited from the natural immune system, enabled dynamic control of system variables either manually or by the semi-automated VM. This allowed a dynamic response to threats and attacks. The system performance is fair, especially if it is implemented as a distributed system, with many hosts holding different sets of detectors and linked via a secure messaging channel.\n", "role": "user" }, { "content": "Purpose: Anomaly detection of network attacks has become a high priority because of the need to guarantee security, privacy and reliability. This work aims to describe both intelligent immunological approaches and traditional monitoring systems for anomaly detection.\nDesign/methodology/approach: The author investigated different artificial immune system (AIS) theories and proposes how to combine different ideas to solve problems in the network security domain. An anomaly detection system that applies those ideas was built and tested in a real-time environment, to test the pros and cons of AIS and clarify its applicability. Rather than building a detailed signature-based model of an intrusion detection system, the scope of this study is to explore the principles of an immune network, focusing on its self-organization, adaptive learning capability, and immune feedback.\nFindings: The natural immune system has its own intelligent mechanisms to detect foreign bodies and fight them; without it, an individual cannot survive, even for several days. Network attackers have evolved new types of attacks. Attacks have become more complex, severe and hard to detect. This results in an increasing need for network defense systems, especially those with the ability to take extraordinary approaches or to face the dynamic nature of continuously changing network threats. The KDD CUP'99 dataset is used as training data to evaluate the proposed hybrid artificial immune principles anomaly detection. The average cost of the proposed model was 0.1195, whereas the winning entry of the KDD'99 competition had 0.233.\nOriginality/value: It is original to introduce an investigation of the biological process of vaccination. 
A special module was built to perform this process, to check its usefulness, and to examine how it could be formulated in artificial life.\n", "role": "assistant" } ]
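The record above combines self/non-self theory (negative selection) with danger theory for network anomaly detection. A minimal sketch of the negative-selection idea only, assuming generic numeric feature vectors: the paper's actual detector encoding over KDD CUP'99 connection records is not specified here, so the matching radius, detector count and toy data below are illustrative assumptions.

```python
# Minimal negative-selection sketch (illustrative only): candidate detectors
# are random points in a normalised feature space; any candidate that falls
# within radius r of a normal/"self" training sample is discarded. Surviving
# detectors then flag test samples within radius r as anomalous (non-self).
import numpy as np

rng = np.random.default_rng(42)

def train_detectors(self_samples, n_detectors=500, r=0.1):
    detectors = []
    while len(detectors) < n_detectors:
        candidate = rng.random(self_samples.shape[1])
        # Negative selection: keep only candidates matching no self sample.
        if np.min(np.linalg.norm(self_samples - candidate, axis=1)) > r:
            detectors.append(candidate)
    return np.array(detectors)

def is_anomalous(x, detectors, r=0.1):
    # A sample is non-self if at least one surviving detector matches it.
    return bool(np.min(np.linalg.norm(detectors - x, axis=1)) <= r)

# Toy usage: "self" traffic clusters near the origin of the unit hypercube.
self_traffic = rng.random((200, 4)) * 0.3
detectors = train_detectors(self_traffic)
print(is_anomalous(np.array([0.1, 0.1, 0.1, 0.1]), detectors))  # likely False
print(is_anomalous(np.array([0.9, 0.9, 0.9, 0.9]), detectors))  # likely True
```

In the paper's hybrid design this layer would sit alongside the danger-theory and vaccination components; the sketch covers only the self/non-self layer.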
Income diversification and bank performance: evidence from BRICS nations
[ "Concentration risk", "Risk management", "Bank performance", "BRICS", "Income diversification" ]
[ { "content": "Summarize the following paper into structured abstract.\n\n1. Introduction: Most of the financial institutions pursue some form of diversification to reduce risk and improve performance. Elsas et al. (2010) documented that for world's largest banks, level of average diversification had been increased around 33 percent from 1996 to 2003 and have only been marginally declined since then. Financial liberalization and reforms have further prompted banks to explore alternate income channels to maintain their earnings and capture market share in the emerging business areas (Meslier et al., 2014). As a result, diversification of sources of income to fee, trading, underwriting and cross-selling-based businesses has become the most prevalent form of diversification in banks reducing their dependence on the traditional interest-based income (Stiroh and Rumble, 2006). The underlying understanding in favor of the overwhelming emphasis on income diversification is that it results significantly higher reduction in concentration risk due to imperfectly correlated income streams when compared to other forms of diversification, namely, across economic sectors or geographical boundaries (Acharya et al., 2006). Moreover, in case of emerging economies the policy changes pertaining to liberalization and opening up of the economy persuaded banks to diversify their income resources as a part of their strategic decisions to capture the untapped opportunity (Beck et al., 2006).\n2. Research methodology: As discussed in the previous section, we use FEM technique following previous studies. As our data set forms an unbalanced panel, the standard OLS models may give inappropriate results due to the potential bias caused by omitted heterogeneity and problem of endogeneity (Stiroh and Rumble, 2006). To address this problem, Gujarati and Porter (2009) advocated FEM, as the basic assumptions of this model are strict exogeneity between the covariates, constant variance and no serial correlation. This model also controls for unobserved heterogeneity pertaining to firm-specific and time-specific features. The relationship under examination is as follows:\n3. Data and descriptive statistics: The period of our study ranges from 2007 to 2015. The sample comprises financial data of banks from BRICS nations. The data form unbalanced panel of annual financial figures which are sourced from Bloomberg and World Bank database. Banks which have at least three consecutive years of time series observations and are consistently in operation during the study period are included in the sample. To control for the measurement errors, we have omitted extreme values in the bank year observations (3 percent highest and lowest values) for each computed variable. By comparing mean values of the primary sample (356 banks) and filtered sample (169 banks), it is verified that their statistical properties are similar. The final sample consists of 169 banks, dominated by Russian banks (70), followed by Indian banks (43), Brazilian and Chinese banks (21 each) and finally South African banks (14). Total bank year covered in the study is 1,415 (12 percent of data sets from Brazil; 13 percent of data sets from China; 25 percent of data sets from India; 40 percent of data sets from Russia and 8 percent of data sets from South African banks).\n4. 
Result and discussion: The empirical results of our fixed effect regression models are presented in Table IV, which investigates the relationship between income diversification and various dependent variables such as ROA, ROE and RISK in alternate models. The study reports mixed results for bank returns and risk across different bank categories. Column (1) denotes the regression results for Equation (1) with ROA as the dependent variable. The estimated coefficients of the diversification measure are positive and significant, indicating that income diversification is beneficial for improving ROA. To examine the role of size, interaction terms for size dummies (small and medium) are used in Columns (2) and (3). In Column (2), the interaction term corresponding to small banks is negative and significant, indicating an adverse effect of diversification on small banks' ROA. However, a significant and positive effect of diversification is reported for medium banks. In Column (3), the dummy coefficient of small banks indicates that these banks are significantly different from the other banks (medium and large). Similar results are found for medium banks.\n5. Conclusion: Financial liberalization in developed and developing nations over the past three decades has resulted in deregulation of interest rates, liberalization of credit control, privatization of financial institutions and removal of entry barriers in capital markets. These changes have prompted banks to explore alternate channels of income to maintain their income and capture market share in emerging business areas (Beju and Ciupac-Ulicii, 2012). The compelling force behind this is the opportunity for business growth in the new business segments. This shift in orientation presents banks with a dual opportunity: they should be able to enhance their returns by entering high-growth business segments while also reducing their dependence on traditional sources of income (mostly interest income). The idea comes from modern portfolio theory, which suggests that efficient diversification should reduce unique risk and is therefore a structural response to concentration risk.\n", "role": "user" }, { "content": "Purpose: The purpose of this paper is to examine the impact of income diversification on bank performance in BRICS countries as a structural response to concentration risk. The authors argue that the effectiveness of this approach is conditional upon its extent and quality. To understand the role of firm-specific characteristics on the effectiveness of diversification, the authors examine this relationship across asset sizes.\nDesign/methodology/approach: An unbalanced panel data set of 169 BRICS banks is sampled over the period 2007-2015. Fixed effect models and system generalized method of moments techniques are used to test the relationship between diversification and bank performance using alternate measures.\nFindings: Results indicate a positive relationship between diversification and performance measured in terms of bank risk and returns for medium and large size banks. However, for small banks this relationship is negative, suggesting a \"diversification discount.\"\nOriginality/value: The study indicates that diversification as a risk mitigating tool can be effective, but managers and regulators should not emphasize a \"one-size-fits-all\" approach for all banks. 
Policy frameworks for controlling concentration risk should be developed keeping in mind factors like bank size, customer base and financial leverage, which bring variations to the risk profile of banks.\n", "role": "assistant" } ]
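The record above estimates fixed-effects models on an unbalanced bank-year panel. A minimal sketch of such an estimation with the `linearmodels` package; the file name, column names (`roa`, `div`, `size`, `leverage`) and the choice of controls are illustrative assumptions, not the paper's actual specification.

```python
# Fixed-effects (within) estimator on an unbalanced bank-year panel, roughly
# in the spirit of Equation (1): ROA regressed on an income diversification
# measure with bank and year fixed effects.
import pandas as pd
from linearmodels.panel import PanelOLS

df = pd.read_csv("brics_banks.csv")        # hypothetical file
df = df.set_index(["bank_id", "year"])     # entity-time MultiIndex required

model = PanelOLS.from_formula(
    "roa ~ div + size + leverage + EntityEffects + TimeEffects",
    data=df,
)
# Cluster standard errors by bank to allow within-bank serial correlation.
res = model.fit(cov_type="clustered", cluster_entity=True)
print(res.summary)
```

The entity effects absorb unobserved, time-invariant bank heterogeneity, which is exactly the omitted-heterogeneity concern the methodology section raises against plain OLS.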
IT and relationship learning in networks as drivers of green innovation and customer capital: evidence from the automobile sector
[ "IT capability", "Automotive industry", "Partial least squares", "Customer capital", "Green innovation", "Relationship learning" ]
[ { "content": "Summarize the following paper into structured abstract.\n\n1. Introduction: Intellectual capital (IC) embodies an organisation's stock of knowledge at a particular moment. IC is a snapshot of what an organisation has learned. Underlying this view is the assumption that IC comprises three components: structural, human and relational capital (Edvinsson and Sullivan, 1996). Some authors have extended the concept of relational capital, conceptualising it as customer capital (CC) (Choo and Bontis, 2002).\n2. Theoretical framework: The arrival of the knowledge era has changed the core values and key resources needed to create competitive advantage in business. IC, such as knowledge and technique, plays an important role in creating competitive advantage and improving business value. Hence, IC is the most valuable asset and competitive weapon of any business. In addition to human and structural capital, IC includes CC, which refers to the firm's relationships with its customers (Edvinsson and Malone, 1997; van Buren, 1999). Long-term relationships with customers represent a foundation for success in competitive markets. Thus, CC comprises relationships between customers and businesses, knowledge contained in marketing channels and customer relationships, the value of relationships between the business and its customers (i.e. the contributions to current and future profits), customer depth (penetration), customer width (coverage), customer attachment (loyalty) and company profitability (Edvinsson and Malone, 1997; Duffy, 2000; van Buren, 1999). Although there is no single definition of CC, all definitions cover the relationship between the business and its customers or deal with the value of this relationship.\n3. Research model and hypotheses: 3.1 Information technology as a determinant of relationship learning\n4. Method: 4.1 Sample selection\n5. Data analyses and results: We analysed the data using structural equations with the partial least squares (PLS) technique. This method, which uses the ordinary least squares (OLS) algorithm, is designed to reflect the theoretical and empirical qualities of social sciences and behaviour, where there are usually situations with insufficiently supported theories and little available information (Wold, 1979). This study used PLS-Graph software version 03.00 Build 1058 (Chin, 2003).\n6. Discussion: As Table II implies, the model predicted 49.2 per cent of RL's variance. This explanatory power owes to IT's prominent role in the relational KM process. As Vargo and Lusch (2004) argue, the basic flow between organisations is information. It is this information that enables the coproduction of products or services among different partners or stakeholders, thereby helping to build relationships, insight and knowledge. The importance of IT raises an important research question: How can two or more firms generate cooperation spaces whereby these organisations can share the knowledge they create? According to Pawlowsky et al. (2001), the question of how knowledge is generated cannot be answered with simple learning tools. Nonetheless, it is plausible for organisations to create organisational environments that foster the adoption and diffusion of shared information systems and hence boost RL. Authors such as Nonaka et al. (2001) reached this conclusion, too, identifying the importance of IT in the knowledge conversion processes through the construction of a shared context or \"ba\". 
IT can thus act as an important ingredient in the design of RL among organisations by providing an infrastructure for storing, accessing and revising elements of organisational memory (Robey et al., 2000).\n7. Conclusions and implications: Despite considerable attention paid by scholars to the role of information, learning and innovation as key factors to increase IC and, consequently, achieve corporate success, little empirical research has examined the managerial actions that may be associated with creating and sustaining CC. Based on an empirical study of data for 140 automotive components manufacturing companies, this study examined how the nature and existence of three managerial actions (IT, RL and GIP) is linked to the nature and existence of knowledge exploration and exploitation processes by an organisation's workforce. We also explored how the existence of these actions contributes to the creation of CC.\nAcknowledgment: This research has been supported by the Junta de Andalucia (Regional Government of Andalusia) (Consejeria de Economia, Innovacion y Ciencia), Spain (Proyecto P10-SEJ-6081).\n", "role": "user" }, { "content": "Purpose: Despite the positive effects of customer capital (CC), questions remain over how managers enable CC growth by applying their skills and capabilities through managerial actions and strategies, such as developing information technology (IT) capability, fostering relationship learning (RL) activities and developing green innovation performance (GIP) with clients. These questions are especially pertinent in small and medium-sized enterprises and automotive industry companies that operate through supply chains, where knowledge about customers is likely to result from personal contact between customers and organisational members. The purpose of this paper is to analyse the extent to which these managerial actions were more likely to lead to the successful creation of CC.\nDesign/methodology/approach: Using the partial least squares technique, this paper studies how these three managerial actions impact on CC. To do so, data from 140 companies in the Spanish automotive components manufacturing sector have been used.\nFindings: The findings support the influence of RL on both GIP and CC. RL is a key managerial action in exploiting customer information and knowledge advantages, enabling firms to structure and reconfigure resources to produce new ways to compete and to satisfy stakeholders. In addition, results show that GIP is a determinant of CC because of its contribution to achieving sustainable competitive advantage, with GIP performing a mediating role in the relationship between RL and CC. A second contribution shows that IT is not in itself able to yield a competitive advantage, thereby validating the existence of complementary or co-focused strategic assets such as RL and GIP, which enhance IT's influence on CC.\nResearch limitations/implications: The authors were unable to explore the subtleties of the processes over time. Future research should include a longitudinal study.\nPractical implications: This study considers RL an essential factor in achieving both GIP and CC. Consequently, managers should seek to build strong RL cultures. 
In addition, this study shows that IT is not in itself able to yield a competitive advantage, thereby validating the existence of complementary or co-focused strategic assets such as RL and GIP.\nOriginality/value: No study has ever examined these three antecedent variables (IT, RL and GIP) together, with the aim of examining their effects on CC.\n", "role": "assistant" } ]
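The abstract above reports that GIP mediates the RL-CC relationship, estimated with PLS path modelling on latent constructs. A full PLS-SEM is beyond a short sketch; as a loose, single-indicator approximation of the same mediation logic, the indirect effect can be illustrated with two OLS regressions and a percentile bootstrap. The file name and construct scores (`rl`, `gip`, `cc`, `it`) are hypothetical.

```python
# Loose illustration of the RL -> GIP -> CC mediation (not the paper's PLS
# estimation): each construct is treated as one observed score, and the
# indirect effect a*b is bootstrapped.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("automotive_survey.csv")   # hypothetical 140-firm file

a = smf.ols("gip ~ rl + it", data=df).fit().params["rl"]        # RL -> GIP
b = smf.ols("cc ~ gip + rl + it", data=df).fit().params["gip"]  # GIP -> CC
print("indirect effect (point estimate):", a * b)

rng = np.random.default_rng(0)
boot = []
for _ in range(2000):
    s = df.sample(len(df), replace=True,
                  random_state=int(rng.integers(2**31 - 1)))
    a_s = smf.ols("gip ~ rl + it", data=s).fit().params["rl"]
    b_s = smf.ols("cc ~ gip + rl + it", data=s).fit().params["gip"]
    boot.append(a_s * b_s)
print("95% bootstrap CI:", np.percentile(boot, [2.5, 97.5]))
```

A confidence interval excluding zero would be consistent with the mediating role of GIP reported in the findings.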
Product lifecycle information acquisition and management for consumer products
[ "Product life cycle", "Information management", "Consumer goods" ]
[ { "content": "Summarize the following paper into structured abstract.\n\n1 Introduction: A typical product lifecycle involves various phases which include design, components/materials acquisition, manufacture, distribution/sale, use, service/maintenance and recycle/end-of-life treatment. Data arise at each stage in the lifecycle and need to be acquired and managed in an integrated and systematic manner to provide timely and accurate information to various stakeholders (e.g. manufacturers, recyclers, suppliers). However, lifecycle data occurring in distribution, usage, maintenance and end-of-life stages are usually hard to acquire and in most cases lost. There were some research studies (e.g. \"Whitebox\" project) which investigated the acquisition of product usage data, but they were mainly concerned with the data acquisition rather than data management. In order to address the need of systematically managing the acquired lifecycle data to close the information loop, we have developed a comprehensive product lifecycle information management system for consumer products in association with a European Commission funded project entitled \"Environmental Life-cycle Information Management and Acquisition for consumer products (ELIMA)\" (Simon et al., 2004). The distinct feature of this lifecycle information management system is that it can receive and manage lifecycle data from domestic appliances occurring in distribution, maintenance, usage and end-of-life in an integrated and systematic manner to provide information and knowledge for decision-making.There are commercially available software packages for product lifecycle management (PLM). For example, mySAP Product Lifecycle Management (mySAP PLM) provides a single source of all product-related information needed for collaborating with business partners and supporting processes such as product innovation, design and engineering, quality and maintenance, and control of environmental issues (mySAP PLM, 2006; Gulledge et al., 2004). The ENOVIA software portfolio from Dassault Systems enables businesses to manage various types of data and resources, including CAD drawings and assemblies, Bill of Material (BoM), various documents, etc. (ENOVIA, 2006). However, these software packages mainly manage the product lifecycle data arising at product design and manufacture stages, they cannot manage the lifecycle data occurring during distribution, usage and maintenance, which have huge potential in creating value for stakeholders involved, especially for manufactures and recyclers (Moore et al., 2000; Simon et al., 2000).A model of lifecycle information acquisition and management for consumer products proposed within the research study includes intelligent data unit (IDU) and information management system. An IDU is an electronic device comprising sensors, controller, memory and communication interface. Each product is equipped with an IDU to acquire the data typically occurring at usage and distribution stages (Moore et al., 2000; Simon et al., 2000). The acquired lifecycle data are then transmitted to the lifecycle information management system. The lifecycle information management system consists of database management system (DBMS), communication manager, information engine, security management, and a series of service components (e.g. report generator, scheduler). 
The information management system formulated was used in field trials within the ELIMA project to manage lifecycle data for refrigeration products and game consoles as examples of white goods and brown goods, respectively. It has been demonstrated that valuable services such as remote monitoring services, remote diagnostic services, and use pattern analysis can be delivered through the information management system.The paper is structured as follows: Section 2 reviews related work. Section 3 introduces a lifecycle information acquisition and management model for consumer products. Section 4 details the analysis and design of the framework for lifecycle information management systems. In Section 5, the information management system formulated is used to manage the lifecycle data for exemplar white goods and brown goods products. The discussions are presented in Section 6.\n2 Related studies: Some research studies in relation to lifecycle information management systems are reviewed, which include enterprise information systems, product lifecycle data and dynamic data acquisition.2.1 Enterprise information systems\n3 A product lifecycle information acquisition and management model: In order to acquire dynamic lifecycle data and manage them in an integrated and systematic manner to close the information loop, each product can be equipped with an IDU to acquire dynamic data (especially usage data), and the data obtained should be transmitted to a lifecycle information management system where information and knowledge can be generated. A model of lifecycle information acquisition and management proposed for domestic appliances is shown in Figure 1.Lifecycle information acquisition and management for consumer products is a composition of hardware (e.g. IDU), software (e.g. information system), communication facilities (e.g. internet), and suitable management tools (e.g. data management), all integrated to generate information and knowledge. In terms of implementation and deployment, the physical lifecycle information management system can be a high-performance central server or a cluster situated at the manufacturer site. Lifecycle data acquired from a set of similar products via corresponding IDUs are transmitted to the central server over wide area networks (e.g. Broadband) where appropriate processing and analysis can be performed. Since each product has a unique ID, the lifecycle data are identified by that ID so that they remain distinct from others. In terms of security, only authorised stakeholders such as supply chain stakeholders and recyclers can have privileges to access the information system, and the level of access will necessarily be controlled.Product lifecycle information management is implemented in such a way that it provides a business platform for manufacturers to interact with various stakeholders such as supply chain stakeholders, distributors, customers, maintenance providers and reuse/recyclers. This solution features both B2B (i.e. business to business) and B2C (i.e. business to customer) commercial activities. For example, access to the data and information coming from supply chain stakeholders and recyclers falls under the category of B2B, whilst access to data and information coming from end-users falls under B2C.\n4 Formulation of a lifecycle information management system framework: The product lifecycle information management system is a crucial part of the model defined. 
In this section, the major elements of the framework of the lifecycle information management system for consumer products are described, and some of the challenges presented are discussed.4.1 Three-tier reference architecture\n5 Test cases: The product lifecycle information management system formulated has been used to manage the lifecycle data within two pan-European field trials featuring the PlayStation 2 (game consoles) and fridge/freezer (appliances) in the ELIMA research programme. ELIMA was a major European Commission-funded framework research project with eight partners that aimed to develop better ways of managing the lifecycles of products and thus reduce their environmental impact. One of the objectives of the ELIMA project was to perform two related field trials, where product lifecycle data are acquired through an IDU and transmitted to a corresponding product lifecycle information management system (Bodenhoefer et al., 2004; ELIMA Seminar, 2005). In the ELIMA field trials, fridge/freezer appliances and game consoles (i.e. PS2) were chosen as exemplar products of white goods and brown goods, respectively. Two types of IDU (i.e. EEPROM- and GSM modem-based) were devised and fitted into different product models of fridges/freezers, while one type of IDU was developed for the PS2 employing RFID technology and some auxiliary sensors and interface components.The test approach adopted can be summarised as:1. Building two specific product lifecycle information management system prototypes.2. Using the prototype systems to demonstrate that the system can provide useful information and knowledge.3. Evaluating test results. The evaluation involved testing the following criteria:* the dynamic data can be acquired;* the dynamic data can be transferred to the system and stored in a database; and* product lifecycle data can be managed to generate useful information and knowledge.5.1 Management of lifecycle data for fridge/freezer appliances\n6 Discussions: In addition to facilitating valuable services as illustrated in the two field trials, the lifecycle information management system can also provide useful information and knowledge to help domestic appliance manufacturers address environmental concerns required within EU directives and legislation (e.g. WEEE, RoHS)[2]. For example, WEEE (Directive on Waste Electrical and Electronic Equipment) and RoHS (Directive on the Restriction of Hazardous Substance) (EUROPA, 2006) require manufacturers to take responsibility for the collection, disposal and recovery of the product at end-of-life as well as to restrict the use of hazardous substances. This means manufacturers/suppliers/recyclers need to share a range of information in respect of the components they use in their products. This information includes items from different suppliers, items from non-EU suppliers, material content of parts, traceability of parts (e.g. due to upgrade), cost of take-back and collection, etc. The lifecycle information management system formulated can be of significant assistance for these purposes. It can assist practicing managers/designers to open new horizons in product design which are sustainable and environmentally sensitive, and contribute to the wider exploration of eco-design and development of next generation consumer products (e.g. 
smart home appliances).The development of lifecycle information management systems requires a good understanding of engineering disciplines such as mechanical, electrical and manufacturing engineering in addition to information and communication technologies. From the research study, we learnt that the challenging issues for manufacturers to develop and deploy such a system include:* developing appropriate IDU technology to ensure good quality data can be acquired;* live transmission of the acquired data to the information system;* developing powerful statistical and analysis capabilities for data mining to provide domain-specific information and knowledge; and* developing user-friendly interfaces whilst ensuring the security of commercially sensitive product life data.The lifecycle information acquisition and management system formulated does not currently adopt Auto-ID technologies. For example, the product identities do not incorporate the EPC code, and information system is not an EPC-enabled information system which can be looked up through ONS. However, refinement of lifecycle information management system to incorporate the EPCglobal Network is not a difficult issue to address.The product lifecycle information acquisition and management model proposed was devised for consumer products. However, there are other sectors in industry which can benefit from such a model. For example, in the automotive industry, life cycle data generated during the use of the vehicle can be collected, stored, transmitted, managed and analysed through the application of the model proposed. The authors are confident that lifecycle information acquisition and management model proposed in this paper can be extended to apply to most product sectors. Theoretically, technologies such as sensors, Auto-ID, wireless communication can be applied to any physical products (e.g. cup, table) to acquire their life cycle data and transmit them to the information system.\n7 Conclusions: A product lifecycle information acquisition and management model for consumer products is proposed, and a lifecycle information management system conforming to this model has been developed. The information system formulated was used to manage the lifecycle data of game consoles and fridge/freezer appliances during the ELIMA field trials. It has been demonstrated that both lifecycle information management systems can effectively manage the product lifecycle information in an integrated and systematic manner to provide valuable services. The lifecycle information management system can open new horizons for product design in terms of realising sustainable and environmentally sensitive solutions, and contributes to the wider exploration of eco-design and development of next generation consumer products (e.g. smart home appliances).\n", "role": "user" }, { "content": "Purpose: - This paper aims to capture and manage the product lifecycle data for consumer products, especially data that occur in distribution, usage, maintenance and end-of-life stages, and to use them to provide information and knowledge.\nDesign/methodology/approach: - A lifecycle information acquisition and management model is proposed, and an information management system framework is formulated. 
The information management system developed is then used in actual field trials to manage lifecycle data for refrigeration products and game consoles.\nFindings: - It has been demonstrated that valuable services can be delivered through a lifecycle information management system.\nPractical implications: - Lifecycle information management systems can open new horizons for product design which are sustainable and environmentally sensitive. They also contribute to the wider exploration of eco-design and development of next generation consumer products (e.g. smart home appliances).\nOriginality/value: - Existing lifecycle information systems cannot support all phases of the product lifecycle. They mainly manage the lifecycle data only during the design and manufacture stages. Lifecycle data during distribution, usage, maintenance and end-of-life stages are usually hard to acquire and in most cases lost. The lifecycle information management system developed can capture them, and manage them in an integrated and systematic manner to provide information and knowledge.\n", "role": "assistant" } ]
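The record above describes IDUs transmitting phase-specific lifecycle data, keyed by a unique product ID, to a central information management system. A small sketch of what such an event store and ingestion step might look like; the table layout and field names are illustrative assumptions, not the ELIMA project's actual data model.

```python
# Sketch of a lifecycle event store: each product has a unique ID, and events
# from distribution, usage, maintenance and end-of-life phases are recorded
# against it, mirroring the "close the information loop" idea in the record.
import sqlite3
from datetime import datetime, timezone

conn = sqlite3.connect("lifecycle.db")
conn.execute("""
CREATE TABLE IF NOT EXISTS lifecycle_event (
    product_id  TEXT NOT NULL,   -- unique product identity
    phase       TEXT NOT NULL,   -- distribution | usage | maintenance | end_of_life
    recorded_at TEXT NOT NULL,   -- ISO-8601 timestamp from the IDU
    sensor      TEXT,            -- e.g. compressor_temp, door_openings
    value       REAL
)""")

def ingest(product_id, phase, sensor, value):
    conn.execute(
        "INSERT INTO lifecycle_event VALUES (?, ?, ?, ?, ?)",
        (product_id, phase, datetime.now(timezone.utc).isoformat(),
         sensor, value),
    )
    conn.commit()

# e.g. a fridge/freezer IDU reporting a usage-phase temperature sample:
ingest("FRZ-000123", "usage", "compressor_temp", 41.5)
```

Services such as remote monitoring or use-pattern analysis would then be queries and aggregations over this per-product event history, with access restricted to authorised stakeholders.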
A conceptual continuous improvement implementation framework for UK manufacturing companies
[ "Implementation", "Continuous improvement", "Manufacturing", "Framework" ]
[ { "content": "Summarize the following paper into structured abstract.\n\nIntroduction: In order to build on previous research and provide a research output of practical relevance a conceptual framework for the implementation of a continuous improvement initiative in UK manufacturing companies is presented.\nResearch background: The research was initially focussed on the identification of themes contributing to the failure of continuous improvement initiatives within manufacturing environments.\nMethodology: In order to achieve the aims of the research, the author completed a critical literature review. All searches were initially conducted using the ABI/INFORM database advanced search function. Across various disciplines, ABI/INFORM is amongst the databases regularly used for conducting a literature review (Frank et al., 2011; Gimenez and Tachizawa, 2012; Tari, 2011). In addition, Emerald Insight was also used to ensure that all relevant literature has been included. The authors are confident that the findings from these databases are representative of the literature available within the search parameters detailed.\nEvaluation of current models and frameworks: Each existing model or framework found in the literature was reviewed in order to identify positives and areas for improvement (Table I). Each positive element identified was deemed worthy of note and considered for inclusion in the new conceptual framework. The areas for improvement serve to demonstrate that none of the current models or frameworks exactly fulfil the needs of this research. When reviewing each existing model or framework the criteria against which it was evaluated was:\nDescription of new framework: The new conceptual framework presented has been developed by incorporating the eight failure themes derived from literature, along with the positive aspects identified from the other models or frameworks reviewed (Figure 2). As it explains what influences implementation outcomes and details a structure and plan for achieving these, the implementation guide detailed is categorised as a framework rather than a model. It can be more specifically defined as a determinant framework. The framework contains several barriers or enablers (determinants) which influence the overall outcome of the implementation. As has been the case with this research, determinant frameworks are typically developed by collating results from previous studies of barriers and enablers, through the review of existing frameworks or through the originators own experience (Nilsen, 2015). The new conceptual framework has utilised all of these approaches.\nSWOT analysis: A SWOT analysis is a common tool used for evaluation. A SWOT analysis has therefore been completed on the new conceptual framework prior to its review and finalisation (Table III). Through this it is clear that whilst the framework contains several strengths; the main weakness and threat that exist is that it has received no real world validation at this stage of the research process. This subsequent review also serves as the main opportunity to separate this framework from others.\nManagerial implications: There are many benefits of the conceptual framework presented. It provides a clear and concise view of the high level process through a summary of the phases and activities necessary in a continuous improvement implementation. As it is specific to manufacturing companies it is also better tailored to their needs. 
This will create awareness among senior managers desiring to improve their business of the role they need to play within that. For continuous improvement practitioners, or consultants, it also details some of the elements and processes that will need to be developed within the implementation whilst not dictating the exact format of these.\nDiscussion: Having identified the need for industry and region specific implementation frameworks, an extensive evaluation of existing frameworks was undertaken through a critical literature review. This was done first to confirm the absence of a continuous improvement implementation framework for UK manufacturing companies, and second to understand the strengths and format of current offerings. Through this it is clear that a gap not only exists with the specific focus (industry and region) of the existing frameworks but also with the content and format of many.\nConclusions and future research: There is an acknowledged need for frameworks specific to both industry and region. Through detailed examination of existing frameworks it has been demonstrated that the requirements of an implementation framework for the UK manufacturing industry have not previously been fulfilled. It has subsequently been demonstrated that the new conceptual framework better addresses these needs.\n", "role": "user" }, { "content": "Purpose: The purpose of this paper is to review current continuous improvement implementation models and frameworks, and present a new conceptual framework tailored specifically to the needs of UK manufacturing companies.\nDesign/methodology/approach: The paper details the in-depth review of 20 existing models or frameworks for continuous improvement implementation, demonstrating the need for a new industry and region specific solution. Through the incorporation of previous findings from literature, coupled with the positive aspects of the current offerings, a new conceptual framework for continuous improvement implementation in UK manufacturing companies is presented.\nFindings: The paper demonstrates the need for industry and region specific frameworks and demonstrates that current models and frameworks do not completely fulfil the objective of the research. A new continuous improvement implementation framework is therefore presented in a user friendly format, comprising positive aspects of previous models and frameworks where applicable.\nResearch limitations/implications: The focus of the research is limited to the UK and the manufacturing industry. The applicability of the framework presented to other regions and industries is therefore not clear. The framework presented is also conceptual at this stage and requires validation through interview and application.\nPractical implications: The framework presented will provide senior managers wishing to implement continuous improvement with an understanding of the high-level process, as well as awareness of their role within the roll out. Overall, the framework provides a clear guide whilst also allowing for individual tailoring of the elements contained to the needs of the company.\nOriginality/value: The paper presents a new conceptual framework; the first to be tailored to the needs of UK manufacturing companies. The framework has also been demonstrated to be a more robust solution than those currently in existence.\n", "role": "assistant" } ]
Frequency of CRM implementation activities: a customer-centric view
[ "Customer relationship management (CRM)", "Optimal timing", "CRM implementation", "Customer relations", "Customer retention" ]
[ { "content": "Summarize the following paper into structured abstract.\n\n__NO_TITLE__: An executive summary for managers and executive readers can be found at the end of this article.Customers are a key factor that keeps an organization operating effectively. Indeed, Kale (2004) posits that developing and maintaining customer relationships is vital for competitive advantage. Acquiring and retaining customers whose lifetime value to the firm is propitious affords a company opportunity to achieve its objectives. Customers are the lifeblood of any firm (Dubinsky, 1999): they nourish the company financially. Failure to pay heed to customers and their variegated needs can leave a marketer bereft of the capacity to leverage the company's competencies. In an effort to provide customers with the \"perfect customer experience,\" customer relationship management (CRM) has become the sine qua non for many organizations as a means of enhancing performance (Payne and Frow, 2005, 2006; Javalgi et al., 2006).\nBackground literature: Use of information technology to manage customer relationships has been perceived as a critical dimension of relationship marketing (Anderson et al., 2007). Because of this, CRM is often misused with the term CRM technology (e.g. Reinartz et al., 2004; Boulding et al., 2005; Frow and Payne, 2009). Kale (2004) argues that this faulty interpretation may be a significant factor contributing to CRM failures. Recent research has attempted to correct this over simplification by broadening the view of CRM and regarding it as a strategic method that focuses on development of profitable, long-term relationships with key customers by melding relationship marketing and information technology and aligning the organization to foster such relationships (Payne and Frow, 2005). As such, CRM is a holistic strategy for managing customer relationships at the corporate level (e.g. Zablah et al., 2003). Benefits of CRM include enhanced information availability for increased coordination of customer service activities and improved performance effectiveness and efficiency of customer contact personnel (Landry et al., 2005).Successful CRM implementation requires attention to four crucial areas: strategy, people, processes, and technology (Crosby and Johnson, 2001; Yim, 2002). Operationalizing this perspective, Yim et al. (2004) propose that implementation entails focusing on key customers, organizing around CRM, managing knowledge, and incorporating CRM-based technology. These four major activities virtually parallel aspects of a broad definition of CRM developed by Payne and Frow (2005) that has gained some academic acceptance (Boulding et al., 2005). To achieve CRM effectiveness in an organization, a firm should develop customer-focused learning and an interaction orientation culture (Kumar and Ramani, 2003, 2006). Bohling et al. (2006) view CRM implementation as a holistic process containing fifteen components; among those, CRM strategy and value proposition development is a core part of CRM implementation.Boulding et al. (2005) aver that the core of CRM is \"dual creation of value.\" Through CRM activities, value is created for both the marketer and the customer. That is, rather than solely trying to maximize value for the firm, management should also be concerned with creating value for the customer. 
Value creation entails discerning what value the organization might offer customers, ascertaining what value customers provide the firm, and maximizing the lifetime value of the customer (Payne and Frow, 2005). Customer lifetime value (CLV) is the predicted profitability of a customer during the entire relationship with the company (Kale, 2004). CLV changes over the life of the relationship and requires integration of the organization with the entire CRM process (Boulding et al., 2005). CRM efforts necessitate an ongoing balance between the customer and the firm as the relationship evolves through its stages (Reinartz et al., 2004). Moreover, the distribution of value across customers is uneven (Reinartz et al., 2004). Therefore, marketers must balance their customer portfolios vis-a-vis their customers' needs, company efforts to foster customer value, and the means by which customers provide value for the firm (Boulding et al., 2005). Part of this entails correctly identifying responsive and profitable target customers (e.g. Cao and Gruca, 2005).CRM and customer-centric management\nHypotheses: Based on the preceding discussion, we propose the following hypotheses:H1. Customers' attitude toward CRM activities affects their evaluation of the interval frequency of CRM activity implementation, such that:H1a. Customers who have a favorable attitude toward CRM activities prefer a shorter interval compared to customers who have an unfavorable attitude toward CRM activities.H1b. Customers who have an unfavorable attitude toward CRM activities prefer a longer interval compared to customers who have a favorable attitude toward CRM activities.H2. Customers' attitude toward CRM activities affects the relationship between the perceived incompatibility of CRM activity frequency and customer retention, such that:H2a. If the interval of CRM implementation is shorter than their preferred level, customers who have a favorable attitude toward CRM activities will have a higher willingness to maintain the relationship with the company than customers who have an unfavorable attitude toward CRM activities.H2b. If the interval of CRM implementation is longer than their preferred level, customers who have a favorable attitude toward CRM activities will have a lower willingness to maintain the relationship with the company than customers who have an unfavorable attitude toward CRM activities.H3. Customers and CRM personnel have different perceptions about how frequent CRM activity implementation should be.The research model of the study is shown in Figure 1.\nMethod: Sample\nResults: Findings for H1\nDiscussion: Because of the dearth of CRM studies regarding customer perceptions of CRM implementation efforts, this research examined the relationship between the frequency of CRM implementation activities and customers' perceptions (favorable or unfavorable) toward that frequency, as well as customers' expectations of the frequency of those actions vis-a-vis their willingness to remain in the relationship with the marketer. Specifically, we explored the potential incompatibility between the actual time interval of CRM implementation efforts and customers' preferences. We did so by comparing customers who have a more favorable view of CRM efforts with those who have a less favorable perspective. We also investigated whether CRM personnel and customer perceptions regarding the frequency of CRM implementation differ. 
Such issues have not heretofore been studied. Our work extends a customer-centric perspective that emphasizes the importance of customers' attitude toward CRM activities in CRM implementation. Prior research has focused on the marketer's perspective in CRM but not on the customer's. To the authors' knowledge, this is the first investigation to specifically consider customers' perceptions in a CRM context, thus adding to the customer-centric literature in the CRM arena. The study also contributes to the empirical evidence regarding the relationship between CRM implementation activities and CRM performance (when intention to remain in the relationship is used as the indicator of performance). Heretofore, scholars had not considered the issue. Furthermore, the present work offers managers insights into the development and implementation of CRM activities based on customers' evaluations. Moreover, the study provides CRM implementation guidelines by focusing on customers' expected time interval between CRM contact activities. In conjunction with efforts to match an appropriate interval with customers' evaluations, however, the investigation revealed that CRM implementation also needs to focus on the usefulness of the information that CRM activities provide customers.According to the findings of the study, an incompatibility exists between the implementation frequency of CRM activities and customers' frequency preference for the activities, and there is a significant impact from this incompatibility on whether customers desire to maintain their relationship with the marketer. Additionally, the results demonstrate that the perceptions (favorable or unfavorable) customers have toward CRM activities moderate their attitudes toward the frequency with which CRM is implemented. This relationship tends to hold both for whether the CRM activity imparts valuable information to those two customer groups and for whether customers intend to retain their relationship with the company.Essentially, those customers who are more favorably disposed toward CRM implementation activities are in favor of shorter interval periods between CRM communications than those who have a less favorable opinion of CRM efforts. This finding suggests that the customer group which is in favor of the CRM efforts is less likely to be adversely affected if CRM implementation activities occur more often than they expect. Conversely, those customers who hold less favorable attitudes toward CRM-related activities feel that a longer time interval is more desirable; that is, they are sensitive to CRM efforts that exceed their anticipated frequency.Study results also indicate that CRM personnel and customers have incongruent perceptions regarding the frequency with which CRM efforts should be undertaken. Customers believe that CRM implementation activities should be directed at them less frequently than CRM personnel believe. The incompatibility could lead to inefficient use of marketing resources (i.e. too frequent CRM implementation) and customer disaffection.\nManagerial implications\n", "role": "user" }, { "content": "Purpose: - Customer relationship management (CRM) is considered a means to create competitive advantage for a company, as well as influence organizational performance. Much research has explored CRM users' point of view vis-a-vis successful CRM implementation, yet little concern has been shown regarding customers' viewpoints toward these same actions. This is surprising given that one of the beneficiaries of CRM is the customer. 
This paper aims to report the results of a study that explored the gap between actual bank CRM actions and customers' expectations of those actions in relation to CRM customers' intention to remain in the relationship.\nDesign/methodology/approach: - This study explores the gap between actual bank CRM implementation and customers' expectations of those actions in relation to customer retention using a survey method. A research model is presented to illustrate the theoretical relationships of the research.\nFindings: - The findings indicate that an incompatibility exists between the interval of actual CRM implementation activities and customers' expectations of the interval and that this incompatibility has an adverse effect on customers' willingness to remain in the relationship. Additionally, customers and CRM personnel hold different perceptions regarding the frequency with which CRM implementation activities should be executed.\nPractical implications: - Implementing CRM service efforts should be compatible with customers' expectations. Therefore, companies should pay keen attention when selecting the optimal frequency of CRM implementation so that it meets customers' expectations. Also, firms may be having too frequent CRM contact with customers, thus creating inefficient use of CRM resources.\nOriginality/value: - This paper explores selected variables that may influence CRM performance vis-a-vis its implementation. The research provides the unique perspective of the customer as a major factor to consider for successful CRM implementation.\n", "role": "assistant" } ]
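The record above turns on comparing customers' and CRM personnel's preferred intervals between CRM contact activities (H3). A minimal sketch of how such a perception gap could be tested; the interval values and group sizes below are invented for illustration, not the study's data.

```python
# Sketch of testing H3: do customers and CRM personnel differ in their
# preferred interval (in days) between CRM contact activities? Welch's
# t-test makes no equal-variance assumption across the two groups.
import numpy as np
from scipy import stats

customer_pref = np.array([30, 45, 60, 30, 90, 60, 45, 30, 60, 90])  # days
personnel_pref = np.array([14, 21, 30, 14, 30, 21, 14, 30])         # days

t, p = stats.ttest_ind(customer_pref, personnel_pref, equal_var=False)
print(f"t = {t:.2f}, p = {p:.4f}")
# A significant result with personnel preferring shorter intervals would
# mirror the finding that firms may contact customers too frequently.
```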
Authoritarian leadership, organizational citizenship behavior, and organizational deviance: Curvilinear relationships
[ "Organizational deviance", "Authoritarian leadership", "Curvilinear", "Benevolent climate", "Supervisor-directed OCB" ]
[ { "content": "Summarize the following paper into structured abstract.\n\nIntroduction: Authoritarian leadership is a prevalent leadership behavior in collectivistic cultures and high power distance societies and has demonstrated inconclusive findings (Chan et al., 2013). It is conceptualized as a leader's behavior that asserts absolute authority and control over subordinates and demands unquestionable obedience (Cheng et al., 2004). The extant research suggested that authoritarian leadership influences employees' behavior negatively and that it has detrimental effects on task performance (Chen et al., 2014). On the other hand, research has revealed that authoritarian leadership is related to positive job attitudes and contributes to firms' revenue growth (Cheng et al., 2004; Huang et al., 2015). Despite the recognition of the mixed and inconclusive findings of the effect of authoritarian leadership, previous work has not paid sufficient attention to possible non-linear relationships of authoritarian leadership with employees' and organizational outcomes (Chan et al., 2013).\nTheory and hypotheses: Authoritarian leadership and OCBS\nMethods: Research design and procedure\nData analysis and results: To check the convergent validity of the scales assessed by employees, we conducted a confirmatory factor analysis (CFA). The CFA shows that the proposed 3-factorial structure with three latent variables for authoritarian leadership, benevolent climate and organizational deviance fit the data well. The model fit indices showed satisfactory results with goodness of fit index=0.91, normed fit index=0.92, comparative fit index=0.95 and root-mean-square error approximation=0.06.\nDiscussion: The primary objective of this study was to understand how authoritarian leadership influences employee OCBS and organizational deviance behavior. Drawing on the social exchange theory, we proposed curvilinear relationships between authoritarian leadership and both OCBS and organizational deviance behavior. Furthermore, we expected that the benevolent climate would moderate the curvilinear relationship between authoritarian leadership and OCBS.\nConclusion: Previous research has demonstrated mixed effects of authoritarian leadership on subordinates' outcomes such as job attitudes, discretionary behaviors and performance. We found curvilinear relationships between authoritarian leadership and subordinates behaviors, i.e. OCBS and organizational deviance behaviors. It thus helps to explain inconsistent and inconclusive research findings in the prior studies. We further found that benevolent climate moderates the curvilinear relationship between authoritarian leadership and OCBS, and it buffers subordinate's inclination for deviance under high authoritarian leadership. Our findings provide new insights into the relation between authoritarian leadership and OCBS and provide context to understand the contingencies of authoritarian leadership. We conclude that moderate levels of authoritarian leadership relate to an optimal degree of employees' OCBS and least organizational deviance. Furthermore, the benevolent climate confines the negative influence of authoritarian leadership on employees' outcomes.\n", "role": "user" }, { "content": "Purpose: The purpose of this paper is to examine the relationships between authoritarian leadership, organizational citizenship behavior toward one's supervisor (OCBS) and organizational deviance. 
The authors hypothesized curvilinear relationships between authoritarian leadership and OCBS, and between authoritarian leadership and organizational deviance.\nDesign/methodology/approach: The authors analyzed two-source survey data of 240 employee-supervisor dyads collected from seven organizations in Pakistan.\nFindings: Employees exhibited the most OCBS and the least organizational deviance at intermediate levels of authoritarian leadership. Employees' perception of a benevolent climate at work moderated the curvilinear relations.\nResearch limitations/implications: The authors cannot draw causal inferences because the data are cross-sectional. Furthermore, the authors' results may be limited to cultures with high collectivism and high power distance.\nPractical implications: This study envisions and illuminates a new avenue of curvilinear relationships among authoritarian leadership, OCBS and organizational deviance.\nOriginality/value: The two-source data (employee-supervisor dyads) collected from seven organizations supported a unique curvilinear relationship among authoritarian leadership, OCBS and organizational deviance.\n", "role": "assistant" } ]
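The inverted U-shaped relationships summarized in this entry are conventionally tested by entering a squared predictor term into a regression model. The sketch below illustrates that pattern on simulated data; the variable names, coefficients and sample values are assumptions for illustration, not the study's actual measures or results.

```python
# Minimal sketch: detecting an inverted U with a squared term (simulated data).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 240                                   # same order of magnitude as the 240 dyads
al = rng.uniform(1, 5, n)                 # authoritarian leadership score (assumed scale)
ocbs = 1 + 1.5 * al - 0.25 * al**2 + rng.normal(0, 0.4, n)  # inverted U by construction

X = sm.add_constant(np.column_stack([al, al**2]))
fit = sm.OLS(ocbs, X).fit()
print(fit.params)       # a negative coefficient on al**2 indicates an inverted U
print(fit.pvalues[2])   # significance test of the squared term
```

A significantly negative squared-term coefficient is the usual evidence for an inverted U; the moderation hypothesis would additionally interact both terms with the benevolent-climate score.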
The creativity passdown effect: applying design theory in creating instance design
[ "Action research", "Design research", "IT artifact", "Design science" ]
[ { "content": "Summarize the following paper into structured abstract.\n\nIntroduction: Theory is regarded as a central element of scholarly works, and the substantiation of a strong theoretical contribution is deemed to be one of key criteria for judging the quality of scholarly works (Whetten, 1989; Sutton and Staw, 1995). In design science research on information systems, the notion of design theory (DT) has been proposed by several scholars (Gregor, 2006; Walls et al., 1992). DT operates under an assumption that scientific and theoretic approaches to design products and processes are meaningful and possible (Simon, 1996). Further, DT is often conceptualized as being a distinct type of theory that may be different from what is posited as theory in natural or social science (Gregor, 2006). In design science research on information systems, a design product (design as a noun) and a design method (design as a verb) are considered to be two constituents of a DT (Walls et al., 1992).\nDT and design science research: Design science is grounded in Simon's seminal work on the science of design (Simon, 1996) which argues that a scientific approach to design is meaningful and valid. Drawing upon this notion of design as a scientific study, several design science researchers in the information systems discipline have argued for the importance of codifying the theoretical knowledge in design science research (Gregor, 2006; Gregor and Jones, 2007; Kuechler and Vaishnavi, 2008; Walls et al., 1992). An early formulation of DT was established by Walls et al. (1992) who argued that DT can be conceptualized as having two dimensions: design product (\"design\" as a noun) and design method (\"design\" as a verb). Building on Walls et al. (1992), several researchers have advanced and extended the notion of DT. Markus et al. (2002) demonstrated that DT can be presented in a loosely structured way that embodies a set of design principles. In a similar vein, van Aken (2004) developed and presented DT in management using a form of technological rules. Gregor and Jones (2007) developed a more structured and generalized way to present DT, which includes additional conceptual elements such as artifact mutability. Further, Goldkuhl (2004) presented a framework that can be used to ground DT in multiple ways.\nDT as a scientific knowledge basis for creating an IDA: While adopting a good DT provides a basis for creating an IDA, it may not necessarily guarantee the successful creation of a good IDA. In fact, creating an IDA may require significant creativity on the mind of the instance designer. One essential element in the process of adopting a DT as a basis for building an IDA may concern the degree to which the DT influences the activities carried out by the instance designer. DTs essentially operate at the abstract level in which general guidelines are specified to deal with a class of design problems. Logically, this abstraction more-or-less creates a gap in creative thinking that needs to be filled by the artifact instance designer in order to translate a DT into practice and to deal with a specific IDA problem of the same problem class. 
DTs provide a scientific knowledge base for creating a class of IDAs, and ultimately the application of DTs can in fact help make design science more visible and practical (Willem, 1990).\nThe creativity passdown effect: In this section, we describe the creativity passdown effect, which occurs when the creative thinking of a team of design theorists inherent in a DT is handed off to a team of instance artifact designers. Prior research on creativity indicates that \"people are more likely to produce unusual, useful ideas if they are given license to do so by the situation or by explicit instructions\" (Amabile et al., 1986, pp. 1159-1160). As such, it appears likely that adopting a DT will provide explicit instructions (vis-a-vis how to deal with a certain class of problems) to the team of instance designers to address an instance problem of the same class. In doing so, it will create an opportunity for the team of instance designers to creatively translate a DT into practice. Further, we suggest that the act of creating an IDA is facilitated by design thinking (Brown, 2008), which is focussed on balancing existing design knowledge (e.g. analytic thinking) and experiential knowledge concerning a particular design problem (e.g. creative thinking).\nAction case: The methodology applied in our study can be described as an action case. An action case is a hybrid research approach that combines action research and interpretive case approaches (Braa and Vidgen, 1999). It combines intervention and interpretation in order to achieve both change and understanding. It is a form of soft field experiment with less emphasis on iteration and learning and more on trial and making. The approach is holistic in philosophy, and prediction is not emphasized. The intervention in our case was that of an IDA team adopting a previously published DT as a scientific knowledge basis for creating an IDA. Our particular focus was to investigate the creative thinking of an IDA team in using the adopted DT for creating an IDA (the creativity passdown effect).\nGeneral discussion: We still have much to learn about applying the design science paradigm and the DT concept in the field of information systems. In this study, we examined one of the central critiques: that the application of DTs diminishes the creative act of design. Against this critique, this study suggested that the application of DTs can be understood as a deferred two-team design process: a team of design theorists creates a DT for a class of problems, and a team of instance designers creates the design for an IDA that ideally matches a specific problem setting. Our experience in the action case suggests that using a DT in creating an IDA may encourage design thinking, and in a certain way increase its power and practical relevance by fostering the creative mind of instance designers. Indeed, DTs provide a scientific basis for dealing with an instance problem, and this evokes the creative mind of instance designers. Without such a scientific basis, it is far more challenging for instance artifact designers to deal with instance problems.\n", "role": "user" }, { "content": "Purpose: The purpose of this paper is to suggest that translating a design theory (DT) into practice (e.g. creating an instance design artifact (IDA)) is hardly straightforward and requires substantial creativity.
Specifically, the authors suggest that adopting a DT embodies a creativity passdown effect in which the creative thinking of a team of design theorist(s) inherent in a DT invokes the creative mind of a team of artifact instance designer(s) in creating an IDA. In this study, the authors empirically investigate the creativity passdown effect through an action case in which a DT (DT nexus) was applied in creating an IDA (a multi-outsourcing decision-making tool).\nDesign/methodology/approach: The case methodology applied here is described as an action case. An action case is a hybrid research approach that combines action research and interpretive case approaches. It combines intervention and interpretation in order to achieve both change and understanding. It is a form of soft field experiment with less emphasis on iteration and learning and more on trial and making. The approach is holistic in philosophy, and prediction is not emphasized. The intervention in the case was that of an instance designer team introducing a previously published DT as a basis for creating an IDA.\nFindings: The experience in the action case suggests that using a DT in creating an IDA may encourage design thinking, and in a certain way increase its power and practical relevance by fostering the creative mind of instance designers. Indeed, DTs provide a scientific basis for dealing with an instance problem, and this evokes the creative mind of instance designers. Without such a scientific basis, it is far more challenging for instance artifact designers to deal with instance problems.\nResearch limitations/implications: This study contributes to the literature concerning design science research, as it challenges the notion that adopting scientific design knowledge limits the creativity inherent in creating an IDA by illustrating the creative elements involved in adopting a DT as a basis for creating IDAs.\nPractical implications: This study offers implications for practice, as it provides new insights regarding how a DT can be used in instance design activities.\nOriginality/value: A report of this research previously appeared as a conference paper. However, this journal version has been completely rewritten to contribute to the literature concerning design science research beyond the conference version. More specifically, in this version, the authors conceptualize adopting a DT to build an IDA as a theoretical basis, and the authors challenge the notion that adopting scientific design knowledge limits the creativity inherent in creating an IDA by illustrating the creative elements involved in executing a DT as a basis for creating IDAs.\n", "role": "assistant" } ]
Analysis of fitment of ERP and conceptualization of its substitute in small segment - a strategic approach
[ "Manufacturing resource planning", "Return on investment", "Linear programming" ]
[ { "content": "Summarize the following paper into structured abstract.\n\n1. Introduction: In this paper exhaustive industry analysis has been done to identify the segments, the key success factors in each segment and the logic of industry correctly. All key success factors in the target segment (up to 50 crore INR turnover, 12 M USD) have been analyzed and compared with offerings of established ERP players. An IT strategy analysis has been done to analyze cash flow and Return on Investment of ERP implementation. This establishes logically the validity of ERP in small segment. An optimized standard operating procedure has been conceptualized as substitute of ERP and savings has also been calculated for that.This paper develops advanced concept in supply chain planning specifically for small and medium scale industries. Without implementing expensive software an SME can gain by the recommendations of this paper. A particular segment of enterprises whose turnover is up to INR 50 Crore (12 M USD) is the sole segment of interest of this paper.\n2. The industry analysis: The primary objective of industry analysis is to identify the opportunities and threats in external environment, to identify different key success factors in different segments, to identify the logic of the industry, the positioning in different segments and the estimated market size. This analysis is also called external analysis. Market segmentation is the process of dividing a market into distinct subsets (segments) that behave in the same way or have similar needs. Since each segment is fairly homogeneous in their needs and attitudes, they are likely to respond similarly to a given marketing strategy.Following possible segmentation variables can be conceptualized for ERP service segment (see Table I).This market segmentation was based on data available in IDC and Gartner. The main objective of market segmentation is to get an idea on key success factors in each segment. From key success factors positioning in each segment can be conceptualized. Let us prepare a segmentation MATRIX to get all possible key success factors for all segments (see Table II).From this segmentation MATRIX the expectation from all different segments have come up. We can frame logic of industry from this. Logic of industry is the key, basic and fundamental requirement of the industry which defines the rule of the game. Critical success factors or key success factors are drivers all the time for ERP and IT solution selection (Bradley, 2008).Shorter implementation cycle\nMicrosoft: Microsoft has its ERP product 'NAVISION' for SMB segment\n3. The ROI analysis: ROI is the one of the important points in the logic of the industry. The segment upto 50 crore INR (12 M USD) is extremely ROI conscious. Let us try to analyze possible expenditure and benefits from ERP implementation in any general case. Cash flow analysis will be the key to analyze whether small segment can really afford it.Cash Flow analysis for any ERP implementation is identical for any small or large implementation. Let us refer to a Harvard Business School Case, \"Whirlpool Europe\" (Balachandran and Ruback, 2002). Cost and Benefits of ERP implementation in Whirlpool - Europe has been discussed in the case.Three fundamental benefits have been conceptualized. These are in the areas of working capital reduction, revenue and gross margin increase and right sizing of the company.From the case facts we can know that the company has 51 days of sales as finished goods inventory. 
Out of 51 days, approximately eight days are for reserved and allocated units, nine days are for units in transit and three days are for obsolete units. The ERP system has enabled Whirlpool to make its supply chain more transparent and efficient, thereby eliminating the reserved, allocated and obsolete units. After a statistical study of its inventory, Whirlpool Europe has developed a theoretical model of 29 days. The primary goal of the ERP system is to increase product availability by making the supply chain more visible and by integrating sales forecasting and inventory management. The company's targeted product availability is 92%. The projections assume that the ERP system and changes in the process would enable the company to realize an increase in unit sales equal to 0.25% of the improvement in product availability. The ERP system is expected to reduce deck employees by 18% and finance employees by 15%. Substantial cost savings are expected from this. From all exhibits in the case we can sum up the total benefits (see Table III). The typical areas of spending are capital equipment, software licenses, charges for consultants, internal cost for engaging employees of the company and cost for production support. Whirlpool needs to spend 4.3 million USD in 1999 for capital equipment, 8.6 million USD in 2000, 6.9 million USD in 2001 and 4.1 million USD in 2002. It would cost 600,000 and 300,000 USD for software licenses in 1999 and 2000 respectively. Implementation requires extensive employee training, testing and documentation of new business processes and, of course, the implementation of the ERP software. From all exhibits in the case we can sum up the total costs (see Table IV). The cost of capital for the money to be used for ERP implementation is 9%. To calculate the return on investment we need to calculate the net cash flow for every year starting from 1999. The production support expense and software license upgrade expense should be constant expenses for subsequent years (see Table V). Using DCF (discounted cash flow) the IRR is 48%. There is DCF software which can calculate the value of the IRR. It calculates the interest rate at which we can discount the cash flows so that the net present value becomes zero. This IRR is nothing but the ROI. This is higher than the 9% cost of capital used for the ERP implementation project. So we can say that the implementation project is feasible. The mathematical equation for this calculation is: NPV = CF_0 + CF_1/(1 + IRR) + CF_2/(1 + IRR)^2 + ... + CF_T/(1 + IRR)^T = 0, where CF_t is the net cash flow in year t. One close observation can be made from this cash flow analysis and ROI: there is a huge negative cash flow in the first and second years of ERP implementation. It is extremely difficult for any company (turnover up to 50 crore INR/12 M USD, which is the segment of study of this paper) to sustain negative cash flow due to ERP implementation. Though the Harvard Business School case study is for a comparatively large company (Whirlpool), the nature of the benefits and costs and the resultant cash flows are fundamentally similar for ERP implementation in any segment of industry. This particular case has been considered for the calculation of cash flows in a real-life ERP implementation scenario. It is established that negative cash flow is inevitable in the first two years. Segments up to 50 crore INR (12 M USD) cannot afford that. DCF over a long time span (10 years) produced a positive result in the case. For the small segment a 10-year DCF horizon is not relevant, which again argues against ERP implementation.\n4.
Non-IT ERP solution: It is evident that there is no feasible ERP solution in this segment. Let us approach the problem from the process side by improving the standard operating procedure. Let us apply out-of-the-box thinking comprising a few processes that possess the intrinsic value of leveraging their innovative capabilities (Prasad and Prasad, 2002). It is the culture of the organization which should select the nature of the IT solution, and not anything else (Morton and Hu, 2007). The purpose of this study is to develop an optimal replenishment policy for deteriorating inventory in a single-level MRP system. The analysis in this study is based on the approaches of the Wagner-Whitin algorithm, Economic Order Quantities, Periodic Order Quantities, Part Period Balancing, McLaren's Order Moment and Linear Programming. In the small segment it is easier to implement a new standard operating procedure than new software. Readiness for change and perceived technological value are important for adopting any solution (Kwahk and Lee, 2008). In this paper we have approached it as follows. On the production side, production orders will be recommended by MRP based on the scheduled ship dates of orders and production capacity. When there is no full capacity utilization, production orders will be based only on the scheduled ship dates of orders. There will be unutilized production capacity in the enterprise. In small and medium size industries this scenario is very common. In this paper we will try to develop an optimum production lot based on minimum set-up cost and labor cost when production capacity is not fully utilized. In almost all ERP packages (including the most popular ones worldwide) the MRP module does not optimize the production batch size when capacity is not fully utilized. MRP recommends production orders based on the scheduled ship dates of orders and back-calculation by lead time. That is why an algorithm should be developed to calculate the optimum production lot based on minimum set-up cost and labor cost when production capacity is not fully utilized. Based on this production plan we will apply the Wagner-Whitin algorithm to further optimize the procurement cost and subsequently the cost of running the enterprise. Both the cost of production and the cost of procurement will be minimized. All studies and analyses have been done in an engineering company. The organization manufactures industrial valves. We are taking the example of a forged steel gate valve. The organization is a medium-scale enterprise with a staff strength of eighty people. The problem is the unsteadiness of incoming orders. Most of the time there is unutilized production capacity for this particular type of valve. The pattern of incoming orders and contractual delivery dates is mentioned below. In our example the finished good is a forged steel gate valve and the raw material is a steel spindle. Only one steel spindle is required to manufacture one forged steel gate valve (see Table VI). Our objective is to develop a methodology which will derive the optimum batch production quantity. Instead of manufacturing 800, 700 and 750 valves in each month as per the MRP recommendations, the organization should manufacture this forged steel gate valve in batches to minimize the production cost. After that we will link the optimized procurement plan for spindles to this. We are minimizing cost at two levels. First we are minimizing the production cost by selecting an optimum batch, and later we are choosing the least-cost procurement batch using another algorithm.
This two-stage minimization of cost will greatly reduce the operating cost of an enterprise. On the production side we will optimize the monthly production quantity in two stages. First we will consider the machine constraints, and later we will consider the finished-goods holding cost constraint. Since this solution mainly targets SMEs, at the time of constraint formulation we should specifically take care of real-life SME scenarios. Because of the uncertainty of independent demand, it is wise not to use all machinery every day, because in most cases the machine capacity remains underutilized. Let us first decide what the optimum production quantity in a month is.\nOptimization of production side\n5. What is new: The paper throws light on the following points, which are new ideas and can be effectively applied in the small manufacturing segment:\n* A non-IT solution for enterprise resource planning which is completely based on standard operating procedure.\n* Targeting the segment for which the turnover is up to 50 crore INR (12 M USD). This is the first serious examination of this segment.\n* Integrating production optimization with the Wagner-Whitin algorithm.\n* A real substitute for an ERP system has been conceptualized.\n* From all angles of analysis there is a strong signal that ERP implementation is not relevant in the small segment.\n* A real emphasis has been given to improving SOPs instead of implementing any software solution in the segment.\n", "role": "user" }, { "content": "Purpose: - The purpose of this paper is to logically establish a non-IT ERP solution for small- and medium-scale industry which is based on the improvement of operational efficiencies and not on ERP implementation. Today's buzzword ERP is not the right fit everywhere. The medium and small segment has many failure stories and causes of dissatisfaction. There are two schools of thought prevailing in this segment of industry. One school of thought favors big investment in ERP. The other school of thought is against investing in ERP. This paper further strengthens the second school of thought with valid logic and real-life case examples.\nDesign/methodology/approach: - The design and approach is to logically establish the unacceptability of an ERP solution in the small segment through industry analysis and ROI analysis, and to develop a model for improving supply chain efficiency using linear programming and the Wagner-Whitin algorithm.\nFindings: - A non-IT ERP solution for a segment of industry whose turnover is less than 50 crore INR (12 M USD, in the Indian perspective), completely based on standard operating procedure. The benefit of this model has been evaluated logically.\nOriginality/value: - A real substitute for an ERP system has been conceptualized in this research paper through improvement in standard operating procedure.\n", "role": "assistant" } ]
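The Wagner-Whitin step invoked in this entry can be written as a compact dynamic program over the period in which the last production (or procurement) order is placed. The sketch below uses the monthly requirements of 800, 700 and 750 valves mentioned in the text; the set-up and holding cost figures are hypothetical placeholders, not values from the paper.

```python
# Minimal Wagner-Whitin sketch: choose lot sizes minimizing set-up + holding cost.
def wagner_whitin(demand, setup_cost, unit_holding_cost):
    T = len(demand)
    best = [0.0] + [float("inf")] * T   # best[t]: min cost covering periods 1..t
    last_order = [0] * (T + 1)          # back-pointer: period of the last order
    for t in range(1, T + 1):
        for j in range(1, t + 1):       # try placing the last order in period j
            holding = sum(unit_holding_cost * (k - j) * demand[k - 1]
                          for k in range(j, t + 1))
            cost = best[j - 1] + setup_cost + holding
            if cost < best[t]:
                best[t], last_order[t] = cost, j
    lots, t = [0] * T, T                # recover the batching plan
    while t > 0:
        j = last_order[t]
        lots[j - 1] = sum(demand[j - 1:t])
        t = j - 1
    return best[T], lots

# Demands from the text; cost parameters are assumed for illustration only.
cost, lots = wagner_whitin([800, 700, 750], setup_cost=5000, unit_holding_cost=2)
print(cost, lots)  # -> 9400, [2250, 0, 0] with these assumed costs
```

With these assumed costs the algorithm merges all three monthly requirements into a single batch of 2,250 valves; a lower set-up cost (or a higher holding cost) would push the plan back toward lot-for-lot production.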
Securitization and Italian banks' risk during the crisis
[ "Liquidity", "Credit risk", "Securitization", "Italian banks", "Financial crisis", "Ordered probit models", "G20", "G21", "G28", "C23", "G32", "G01" ]
[ { "content": "Summarize the following paper into structured abstract.\n\n1. Introduction: During the past two decades, structured finance and credit securitization has impressively grown all over the world and in Italy too. Theory and empirical literature have investigated many issues related to this topic, such as its capacity to contribute to achieve capital arbitrage (Calomiris and Mason, 2004) or its efficacy as a risk management technique (Cantor and Rouyer, 2000). Nonetheless, many questions are still open, especially referring to the Italian market, that have not yet been sufficiently studied. Furthermore, in recent years, the interest of academics, managers and regulators towards this financial technique has increased due to the 2007-2009 financial crisis, which resulted in the largest realization of bank risk since the Great Depression. The decimation of the market value of banking shares during this period was unprecedented: more than 3 trillion were erased from the market capitalization of banks in Europe and the USA. This corresponds to a decrease of 82 per cent in the stock market value of these banks between May 2007 and March 2009. All the previous considerations explain the potential interest for this topic.\n2. Model and data: To address if and how securitization affects banks' risk-taking and liquidity, by adopting an ordered probit model, we regress, in turn, changes in banks risk-taking and liquidity on a securitization dummy, a previous securitization dummy and on a set of control variables. Using an ordinal-dependent variable allow us to analyze the behavior of the bank i, in terms of probability, distinguishing between changes in bank behavior and relatively constant behavior. In particular, focusing on changes of 2 per cent, our dependent variables take on the following values:\n3. Variables: 3.1 Key independent variables: securitization dummy and previous securitization dummy\n4. Empirical analysis and results: Table II summarizes some relevant statistical information for the key independent variables, for the dependent variables, and for the control variables. While the regressions are run in changes, we report means of levels and changes for all the variables (to prevent extreme values from biasing the results of our study, without losing observations, variables are Winsorised at 5 per cent).\n5. Additional controls: In this section, to further verify our results, we implement a set of robustness checks concerning the model specification and the estimation method.\n6. Conclusions: This study examines the 2007-2009 financial crisis to analyze how securitization relates to the Italian bank risk profile, both in terms of credit and liquidity risks. Consistently with Santomero and Trester (1998), empirical findings generally show that securitization produces positive effects, both on the credit risk and the liquidity position of the originators. Moreover, size seems to play an important positive role in determining the credit risk of analyzed banks, while the larger ones appear more liquid. 
Unexpectedly, banks involved in M&As seem to be less liquid than those not involved in M&A operations.\n", "role": "user" }, { "content": "Purpose: The purpose of this study was to examine the 2007-2009 financial crisis to analyze how securitization relates to the Italian bank risk profile, both in terms of credit and liquidity risks.\nDesign/methodology/approach: To test our research hypotheses, we adopt ordered probit models, in which we regress the changes in credit risk and liquidity on a set of regressors, including two securitization dummy variables plus a vector of control variables.\nFindings: Our results show that the impact of securitization on the originators' risk-taking is not uniform. When credit risk is considered, the securitization effects seem to be statistically significant only during the crisis period. However, when we turn to analyze the banks' liquidity position, our results show that securitization improves it both during the pre-crisis and the crisis years. Our results support the Basel III initiatives aimed at achieving better integration between the different types of risk (i.e. credit and liquidity risks).\nResearch limitations/implications: The major limitation of our study is related to the analyzed geographic area.\nPractical implications: First, our results support the Basel III initiatives aimed at achieving better integration between the different types of risk (i.e. credit and liquidity risks). In general, the broad policy implication of the paper is that in some contexts, such as the Italian market, securitization does not necessarily produce negative effects in terms of bank risk.\nOriginality/value: This study contributes to the empirical literature on the effects of securitization for banks in several ways. First, we consider the complexity of the bank's risk profile; second, despite the importance of the Italian securitization market, there is a research void on it. Furthermore, unlike previous studies, our analysis covers the period 2000-2009, including the financial crisis years. Finally, to our knowledge, our methodology (ordered probit models) has not been used in the past in this context.\n", "role": "assistant" } ]
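The estimation strategy described in this entry - an ordered probit of a three-way change variable (decrease, roughly constant, increase) on securitization dummies and controls - can be sketched as follows. The data are simulated and the variable names are assumptions; only the modeling pattern mirrors the study.

```python
# Minimal ordered probit sketch on simulated data (statsmodels >= 0.12 assumed).
import numpy as np
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

rng = np.random.default_rng(1)
n = 500
sec = rng.integers(0, 2, n)           # securitization dummy
prev_sec = rng.integers(0, 2, n)      # previous-securitization dummy
size = rng.normal(0, 1, n)            # control, e.g. log total assets
latent = 0.4 * sec + 0.2 * prev_sec + 0.3 * size + rng.normal(0, 1, n)

# Ordinal outcome: -1 = decrease, 0 = roughly constant, 1 = increase
y = pd.Series(pd.cut(latent, bins=[-np.inf, -0.5, 0.5, np.inf], labels=[-1, 0, 1]))

X = pd.DataFrame({"sec": sec, "prev_sec": prev_sec, "size": size})
res = OrderedModel(y, X, distr="probit").fit(method="bfgs", disp=False)
print(res.summary())  # slope estimates plus two estimated cut points
```

The two estimated cut points play the role of the thresholds separating the three ordered outcomes; in the actual study the categories are defined directly on observed changes of about 2 per cent in the risk and liquidity measures.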
Laboratory errors and patient safety
[ "Private healthcare", "Patient safety", "Medical records", "Health and safety", "Service quality" ]
[ { "content": "Summarize the following paper into structured abstract.\n\nIntroduction: Health hazards and laboratory errors\nPhases of laboratory procedures: Laboratory services include the following specialties: clinical chemistry; hematology; microbiology; pathology; and transfusion medicine. Laboratory testing is initiated by a physician requesting a test to reach a diagnosis. Once the laboratory test is ordered, the patient is identified and the specimen collected, transported and prepared for analysis. After analysis, the results are interpreted and reported. Action is taken based on test result interpretation. The laboratory cycle process includes three phases: pre-analytic; analytic (automated or manual techniques); and post-analytic phase. A search was conducted using MEDLINE covering the last eight years to identify analytical error frequency and type data (Bonini et al., 2002). There was considerable concordance on error distribution throughout the laboratory working process: most occurred in the pre- or post-analytical phases, whereas a minority (13-32 percent according to the studies) occurred in the analytical portion. Error frequency was related to how they were identified: when a careful analysis process was performed. Substantially more errors were discovered than when studies relied on complaints or near accident reports (Bonini et al., 2002). Clinical laboratory staffs have long focussed their attention on quality control methods and quality assessment programs dealing with analytical aspects. These efforts significantly reduced errors caused by analytical problems over time, but there is evidence, particularly for immunoassays, that interference may have a serious impact on patient outcomes (Plebani, 2007). However, growing evidence suggests that clinical laboratory quality cannot be assured by merely focussing on purely analytical aspects. The more recent surveys on laboratory errors concluded that in laboratory testing, most errors pre-analytical (46-68.2 percent of total errors), while high-error rates (18.5-47 percent) have also been found in the post-analytical phase (Plebani, 2007).\nMain laboratory errors encountered in three testing phases: Pre-analytic mistakes usually result from human errors arising at any step before an analysis occurs including:\nProcedure: This study was performed in the Department of Laboratory Medicine staff in one Egyptian private hospital during May 2008. The laboratory service provides tests for clinical chemistry, hematology, coagulation, and highly specialized care units, including organ transplantation. Laboratory errors were recorded during May 2008 and analyzed (using simple percent distributions) in relation to: laboratory phase; and error type and its impact on patient health.\nResults: Our results are summarized in Tables I-IV. From 1,600 laboratory tests 14 (representing 0.875 percent of all procedures) were noted. Their distribution, by analytic phase, are summarized in Table I.\nConclusion and recommendations: Laboratory data are extensively used in medical practice. Laboratories provide about 70-80 percent of information in the healthcare system (Silverstein, 2003). Consequently, laboratory errors have an impact on patient safety. In Arabic countries, there are no accurate statistics about medical errors and their impact on patient health or survival. Medical errors are underestimated, since most are not recorded (Khashaba Ola, 2007). Medical record deficiencies make it difficult to get measurement reliability data. 
Also, few studies have directly evaluated the relationship between medical errors and adverse events. Error reporting problems result from data heterogeneity, few benchmarks and unsatisfactory definitions of the maximum allowable error rate and of error classification.\n", "role": "user" }, { "content": "Purpose: Laboratory data are extensively used in medical practice; consequently, laboratory errors have a tremendous impact on patient safety. Therefore, programs designed to identify and reduce laboratory errors, as well as specific strategies, are required to minimize these errors and improve patient safety. The purpose of this paper is to identify some of the commonly encountered laboratory errors throughout our practice in laboratory work, their hazards to patient health care and some measures and recommendations to minimize or eliminate these errors.\nDesign/methodology/approach: The laboratory errors encountered during May 2008 were recorded and statistically evaluated (using simple percent distribution) in the laboratory department of one of the private hospitals in Egypt. Errors have been classified according to the laboratory phases and according to their implications for patient health.\nFindings: Data obtained from 1,600 testing procedures revealed that the total number of encountered errors was 14 (0.875 percent of total testing procedures). Most of the encountered errors lay in the pre- and post-analytic phases of the testing cycle (representing 35.7 and 50 percent, respectively, of total errors), while the number of test errors encountered in the analytic phase represented only 14.3 percent of total errors. About 85.7 percent of total errors had no significant implications for patients' health, being detected before test reports had been submitted to the patients. On the other hand, the number of test errors that had already been submitted to patients and reached the physician represented 14.3 percent of total errors. Only 7.1 percent of the errors could have an impact on patient diagnosis.\nPractical implications: The findings of this study were consistent with those published from the USA and other countries. This suggests that laboratory problems are universal and need general standardization and benchmarking measures.\nOriginality/value: This paper presents the first data published from Arabic countries that evaluate the encountered laboratory errors, and it highlights the great need for universal standardization and benchmarking measures to control laboratory work.\n", "role": "assistant" } ]
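The percentages reported in this entry imply the underlying error counts: 0.875 percent of 1,600 tests is 14 errors, and 35.7, 50 and 14.3 percent of those 14 errors correspond to 5, 7 and 2 errors, respectively. The short back-check below uses counts reconstructed from the percentages, not counts taken from the paper's tables.

```python
# Back-checking the reported percentages; counts are inferred, not from Table I.
total_tests = 1600
errors = {"pre-analytic": 5, "post-analytic": 7, "analytic": 2}  # inferred counts

total_errors = sum(errors.values())                  # 14
print(round(100 * total_errors / total_tests, 3))    # 0.875 percent of all tests
for phase, n in errors.items():
    print(phase, round(100 * n / total_errors, 1))   # 35.7, 50.0, 14.3 percent
```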
Corporate social responsibility and corporate financial performance in China: an empirical research from Chinese firms
[ "Stakeholders", "Corporate social responsibility", "Corporate financial performance", "Social responsibility" ]
[ { "content": "Summarize the following paper into structured abstract.\n\n1. Introduction: It is universally acknowledged that it was Shelton who first mentioned the concept of corporate social responsibility (CSR) in 1924. In 1930s, the question that who was the fiduciary of the corporate managers aroused the famous debate between Professor Berle and Dodd. Berle believed that corporate managers could only be the trustees of shareholders. However, Dodd conceived company was an economic institution with the function of profit making and social service, so managers were agents of employees, consumers and the public in addition to shareholders. The debate between Berle and Dodd has greatly influenced the research of CSR.Corporate social responsibility has become a hot topic of Western management community after several decades of arguments. Although there are still different schools of thought endlessly arguing about what CSR is, why and how to take CSR, the academic and business community begin to look further into CSR empirically today. One of the hot topics is whether a company will improve its financial performance or not if it behaves more social responsible, i.e. what is the relationship between CSR and its financial performance. This problem was expounded and discussed widely by scholars in the past three decades, but failed to reach agreement. It is a pity that there are few empirical literatures about the relationship between Chinese companies' CSR and financial performance. This partly explains the reason why some companies in China hesitate in taking social responsibility since there lacks clear theory guidance. According to the investigation of Chinese companies, this paper tries to use stakeholder theory to answer this question, and hence reassure companies to devote themselves to undertaking CSR more actively.\n2. Literature review: 2.1 CSR and financial performance:conflict or coincident?\n3. Research framework and hypothesis: 3.1 Research framework\n4. Results: 4.1 Factor analysis results\n5. Conclusions and discussions: On the whole, considering time-lag and cause and effect factors, we explore the relationships between CSR and CFP with data in 2007 and 2008 from investigation. The data analysis indicates that there exists significant and reciprocal relationship between homophonous CSR and CFP in Chinese enterprises. In addition, the postponement influence between CSR and CFP is significant as well. While the variances between CSR and CFP are significant and positive. H1 to H5 are basically tested and verified.Basically, many Chinese companies are hesitant and worry too much in taking social responsibilities as they always fear actively implementing social responsibilities will not yield good results. But our research implies that, by taking social responsibilities, companies can not only improve this period's financial performance but also financial performance of the next period through more positive action of CSR. The reasons are as follows: although it requires companies to input resources in taking social responsibilities, it satisfies the benefit requirements of every stakeholder. Thus, internal stakeholders will dedicate more to make contributions for companies and external stakeholders will develop good impression toward companies. Besides, consumers will purchase more goods or service. Therefore, the companies' benefits can be improved. 
At the same time, companies with good financial performance are more likely to obtain scarce resources, which can provide more opportunities for companies to undertake social responsibilities and improve relationships between companies and stakeholders, thus generating better financial performance.\n", "role": "user" }, { "content": "Purpose: - Although the relationship between corporate social responsibility (CSR) and financial performance has become a hot topic of the Western management community after several decades of arguments, there is still little empirical literature about the relationship between Chinese companies' CSR and financial performance. Based on an investigation of Chinese companies, this paper aims to use stakeholder theory to answer this question.\nDesign/methodology/approach: - A theoretical framework is proposed based on stakeholder theory, defining nine kinds of stakeholders and viewing companies' CSR activities as responses to the interest requirements of these stakeholders. Some proxy variables are also set to depict CFP. Subsequently, this paper uses the data collected in 2007 and 2008 from Chinese firms to explore the relationship between CSR and corporate financial performance (CFP) empirically.\nFindings: - The results show that companies' social responsibility activity can improve their financial performance in the current year, has significant effects on their financial performance in the next year, and vice versa. The variations of CSR and financial performance can also significantly influence each other.\nOriginality/value: - This research integrates the factors of time delay and cause and effect into the relationship between CSR and CFP, and thus provides theoretical support for companies undertaking CSR.\n", "role": "assistant" } ]
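With only two annual waves of data (2007 and 2008), the reciprocal and lagged CSR-CFP effects described in this entry are typically examined with cross-lagged regressions, regressing each 2008 outcome on both 2007 variables. The sketch below is a minimal illustration on simulated data; the variable names, coefficients and sample size are assumptions, not values from the survey.

```python
# Minimal cross-lagged sketch: CFP_2008 ~ CSR_2007 + CFP_2007, and symmetrically.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 300
csr_07 = rng.normal(0, 1, n)
cfp_07 = 0.3 * csr_07 + rng.normal(0, 1, n)                    # same-period link
cfp_08 = 0.25 * csr_07 + 0.5 * cfp_07 + rng.normal(0, 1, n)    # lagged CSR effect
csr_08 = 0.20 * cfp_07 + 0.6 * csr_07 + rng.normal(0, 1, n)    # lagged CFP effect

m1 = sm.OLS(cfp_08, sm.add_constant(np.column_stack([csr_07, cfp_07]))).fit()
m2 = sm.OLS(csr_08, sm.add_constant(np.column_stack([cfp_07, csr_07]))).fit()
print(m1.params[1], m2.params[1])  # significant cross-lagged terms suggest reciprocity
```

Significant coefficients in both directions are what the study's pattern of reciprocal and lagged effects (H1 to H5) would look like in this simplified setup.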
Knowledge creation and play: A phenomenological study within a multi-professional and multi-organizational community
[ "Interaction", "Knowledge creation", "Knowledge management", "Knowledge processes", "Communication", "Phenomenology" ]
[ { "content": "Summarize the following paper into structured abstract.\n\n1. Introduction: The research into knowledge creation (KC), within the multidisciplinary field of knowledge management (KM), has focused largely on how to foster an atmosphere for open and creative interaction in organizations. This is based on the notion that new knowledge is created in interaction, when people with different backgrounds and knowledge bases encounter each other (Tsoukas, 2009). A review of the KC research indicates a striving toward defining a \"creative\" state of being or atmosphere which promotes creative interaction (Suorsa, 2015). Studies have focused on the circumstances which promote employees to engage in open interaction. Additionally, personal relationships, as well as the behavior and previous experiences have been shown to affect interaction.\n2. Theoretical background: Studies on KC in organizations have mostly relied on the models of Nonaka and his colleagues, based on the idea of tacit and implicit knowledge converted into explicit knowledge (e.g. Nonaka, 1994; Nonaka and Takeuchi, 1995). However, since the 1990s a number of studies have challenged these views. The interest has shifted to the event of knowing and interpersonal relationships in working communities (Cook and Brown, 1999; Orlikowski, 2002; Kupers, 2005). The social and contextual aspects of knowledge processes have been studied in LIS, and the importance of collaboration has been emphasized (see e.g. Ingwersen and Jarvelin, 2005). Collaboration requires interaction, which is a necessity in creating knowledge (Tsoukas, 2009; Morner and von Krogh, 2009; Suorsa and Huotari, 2014a). Many of the studies examining KC interaction are based on the notion that during interaction people share their experiences and create new knowledge based on their previous experiences (Cook and Brown, 1999; Tsoukas, 2009; Morner and von Krogh, 2009; see also Suorsa and Huotari, 2014a; Suorsa, 2015). Information gained from one person is always changed as the other person interprets it from his or her own perspective in a certain situation (Cook and Brown, 1999, p. 393). Human being's own capacity to be reflective (Tsoukas, 2009; Topp, 2000), critical (Mitchell and Nicholas, 2006; Mitchell et al., 2009), and open to new suggestions (von Krogh, 1998) in interaction have an impact on KC. Furthermore, trustful and open relationships between the employees affect the nature of interaction (Cross et al., 2001; Mitchell et al., 2009). Additionally, contextual elements, such as safe and flexible circumstances, promote knowledge-creating interaction (Kelly, 2007; Richtner and Ahlstrom, 2010; see also Suorsa, 2015).\n3. Methodology and data: The phenomenological viewpoint guided the selection of the empirical research methods (see Suorsa and Huotari, 2016; Moustakas, 1994; Lopez and Willis, 2004). The phenomenon of KC was understood both as an event and an experience. The primary nature of the present moment, as well as the subjective view, was highlighted (Wachtenhauser, 1986, p. 36). In addition, Gadamer's emphasis on language and interpersonal conversations (Linge, 2008) affected the way of conducting the empirical study, as the methods used take into account the dialogical nature of understanding. Thus, the study was conducted in the light of an ethnographical approach (Smith, 2001; Maso, 2001) in a community of librarians and teachers participating in a program called the Joy of Reading (JOR) in Finland. 
In this program, 30 school and public library pairs piloted their ideas for promoting literacy within their communities. This study examines KC in one of these pilot pairs, consisting of the regional library and seven primary and secondary schools in a mid-sized city in Finland. In this study, this pilot pair as a whole is called the JOR community. The eight different organizations forming the JOR community are called JOR sub-communities (see Figure 2).\n4. Results: In the JOR community, the participants developed several means to interact in quite challenging circumstances: the community interacted from eight different locations and did not have joint arenas of interaction in their everyday life (see also Suorsa and Huotari, 2014b). In addition, only an e-mail list was used for sharing information (O1, Tea2, Tea1). Thus, although it was difficult to arrange time for face-to-face meetings, they were the main form of interaction (Tea6, Tea8, Tea1, Tea7). Overall, the participants gave rich descriptions of the meaningful experiences of interaction in the JOR community. Moreover, they described several cases where they felt that they had created something new together. The analysis of the video materials shows 26 sequences in total in which the participants were involved in enthusiastic discussions that had some features of play. The extracts of the discussions in this section have been chosen to exemplify the key phenomena detected from these sequences. In the extracts, the same speaker codes are used as in the interviews (see Tables I and II). In the cases where the speakers have not been interviewed, they are referred to using the same logic (e.g. Tea10).\n5. Discussion: The aim of this research has been to examine: how knowledge-creating interaction can be studied with the conceptions of Gadamerian play and the phenomenological idea of a human being, what kinds of phenomena are revealed by using this conceptualisation in studying the JOR community, and what other dimensions were crucial to playful interaction in the empirical study and how the suggested framework should be developed based on its empirical testing.\n6. Conclusions: The phenomenological approach highlighting the temporality of the participants' being and the interactive events as part of the everyday life of the participants can shed light on the knowledge-creating process. With the methods used in this study, the focus is shifted away from the individuals, as the interaction is viewed as a joint accomplishment (see McKenzie, 2009; Day, 2011). Gadamer's idea of play emphasizes the shared event of being together, where the individual's experiences as such are not the core, but the event as a whole is. The approach used in this study provides opportunities for developing further methods for analyzing interaction as an event by using video recording in investigating phenomena related to information and knowledge processes (see Budd, 2006; Tsoukas, 2009; McKenzie, 2009). Moreover, with the phenomenological approach, the emphasis can be shifted toward future possibilities which promote KC without forgetting the significance of past experiences in the process.
The framework suggested in this study also offers possibilities for understanding how working life should be developed in order to foster KC in organizations, as this study emphasizes the importance of having time for playful interaction, as well as of having the courage to lose oneself in play by giving up individual goals and needs in the discussion.\n", "role": "user" }, { "content": "Purpose: The purpose of this paper is to present a framework for empirically studying knowledge creation (KC) with a phenomenological approach and to propose that understanding interaction as play, as conceptualized by Hans-Georg Gadamer, allows examining KC starting from the idea of a human being interacting in the events of co-creation. The presented framework is used to examine KC in a community of librarians and teachers collaborating to promote children's joy of reading.\nDesign/methodology/approach: An ethnographic approach is applied to investigate knowledge-creating interaction in a working community. The triangulated data consist of ethnographic observations and video recordings of the community's gatherings, its members' interviews and produced documents.\nFindings: The phenomenological conceptions of the temporality of a human being and play are suitable for understanding being in knowledge-creating interaction, as they provide means to understand the meaningfulness of past experiences while promoting an open attitude toward future possibilities in a way that fosters KC. Studying interactive events allows understanding how KC can be examined as a collective accomplishment. The playful mode of being in the event was seen as a way to use the limited time available for interaction effectively.\nResearch limitations/implications: The empirical study was conducted in one community, and further research is needed to test the developed approach in other contexts.\nPractical implications: The results may be utilized to develop organizational circumstances that promote KC by acknowledging the meaningfulness of interaction.\nOriginality/value: The study presents a novel way to conceptualize and examine KC as an experience and an event with a phenomenological approach.\n", "role": "assistant" } ]
Teacher-related factors in assimilation of technological change in schools: The case of an Arab school in Israel
[ "Collaboration", "Israel", "Arab education system", "Technological change", "Change management", "Teachers’ factors" ]
[ { "content": "Summarize the following paper into structured abstract.\n\n1. Introduction: It is evident that Middle Eastern countries are striving to improve their education systems to meet twenty-first century demands. Thus, Israel's education system also recognizes the benefits of introducing innovative teaching and learning strategies based on computer technology into its schools (Abdul Razzak, 2013; Abu Al-Zur and Qablan 2011; Al Sharija et al., 2012). This research was motivated by the need of educational administrators and policymakers in Israel, and in particular those in the Arab education system, to improve their understanding of the implementation of technological change in the educational context. It also aimed to identify possible barriers, which might hinder the development of improved technology-assisted teaching performance and productivity and the enhancement of related topics connected with teaching and learning at schools.\n2. Theoretical framework: 2.1 Assimilation of a managerial technological change at schools\n3. The study objectives: This paper explores teachers' attitudes toward the implementation of new computer technology to improve teaching and learning products at a private Arab school in Israel. Specifically, the aim was to individuate teachers' factors associated with higher productivity of this technological change.\n4. Methodology: 4.1 Population and sample\n5. Findings: 5.1 Descriptive statistics relating to the research sample\n6. Discussion, conclusions, and directions for future research: 6.1 Discussion\n", "role": "user" }, { "content": "Purpose: The purpose of this paper is to explore teachers' attitudes toward the implementation of new computer technology to improve teaching and learning products at a private Arab school in Israel. Specifically, the aim was to individuate teachers' factors associated with higher productivity of this technological change.\nDesign/methodology/approach: The research used the mixed-methods approach to enrich the data derived from a case study. It employed a questionnaire of 81 items administered among 55 teachers and in-depth interviews with both teachers and senior management team members at one private Arab school in Israel.\nFindings: Results show some features that characterize the teachers who rated the productivity of this new management change highly. Those teachers tend to have high expectations of the change and to view the change implementation process favorably. Teachers with such characteristics tend to be female teachers. The influence of teachers' education was latent. Their teaching experience influenced only their expectations and views of the process, but not their perceptions of the change products.\nResearch limitations/implications: The paper focused only on one private school known for its excellence and teaching staff, and thus may not apply to the entire Arab education system in Israel. Nevertheless, the findings indicate how to increase teaching productivity when planning the management of technological change for increased teaching benefits in schools with similar characteristics.\nOriginality/value: This paper explored a case in which technological change was implemented through a careful process of management planning, in order to facilitate the construction of a model of indicators to facilitate change.\n", "role": "assistant" } ]
Outsourcing/offshoring insights: going beyond reshoring to rightshoring
[ "Outsourcing", "Governance", "Offshoring", "Location", "Insourcing", "Make or buy", "Reshoring", "Rightshoring" ]
[ { "content": "Summarize the following paper into structured abstract.\n\n__NO_TITLE__: The last decades have seen manufacturing and services offshored from industrialized countries to low labor cost regions like Asia. These geographic locational decisions were often made based on low price and ignored other important criteria such as additional cost measures and risk in the decision-making process (Tate et al., 2009; Bhatnagar et al., 2003). Because of increasing wages in typical low cost countries and other added complexity such as economic and supply chain risk, many well-known global organizations as well as smaller companies (Gray et al., 2013) are re-evaluating their supply chain designs (Tate et al., 2014; Christopher and Holweg, 2011). Also, continuously rising automation and robotization (\"Internet of Things\") with low or even without human intervention in industrialized countries (OECD, 2016) is driving many organizations to re-evaluate their \"shoring\" decisions from both a manufacturing and a supply perspective. Re-evaluations of \"shoring\" decisions inherently create a need to re-examine theoretical and academic contributions to this rapidly changing phenomenon. As special issue editors, the intent of this special issue was to nurture and develop manuscripts that add to the exciting and dynamic body of knowledge on \"rightshoring.\"\n", "role": "user" }, { "content": "Purpose: The last decades have seen manufacturing and services offshoring on the rise, often motivated by low prices and without consideration of other important criteria such as additional cost measures and risk. With wages in former low-cost countries and automation/robotization increasing, these decisions are increasingly contested. Re-evaluations of \"shoring\" decisions inherently create a need to re-examine theoretical and academic contributions to this rapidly changing phenomenon. Therefore, the special issue sought manuscripts that added to the exciting and dynamic body of knowledge on \"rightshoring\". The paper aims to discuss this issue.\nDesign/methodology/approach: The paper starts out by delimiting outsourcing/insourcing and offshoring/reshoring as part of a conceptual \"rightshoring\" framework to establish a common terminology and context for the insights gathered in the special issue. It illustrates that \"shoring\" options can be classified along geographical and governance dimensions.\nFindings: Both the geographical and governance dimensions are part of the rightshoring decision which is an important conceptual foundation for this special issue, as it invited insightful pieces on all of these phenomena (e.g. outsourcing, insourcing, offshoring, reshoring), acknowledging that these decisions are embedded in the same context - firms making governance and location decisions. Therefore, papers 1-4 primarily focus on offshoring, whereas paper 5 focuses on insourcing and paper 6 on reshoring. Their main findings are summarized in Table II.\nResearch limitations/implications: Suggestions for future research out of the six papers are summarized in Table III. There is ample opportunity to further shed light on these suggestions as well as to cover parts of the \"rightshoring\" framework presented, that remain less covered here (e.g. insourcing and/or reshoring).\nPractical implications: The array of potential \"rightshoring\" options fosters clarity about the phenomena studied and their implications. 
The main practical implications of the six papers are summarized in Table II.\nOriginality/value: The overall conceptual framework highlights the positioning of the final papers included into the special issue and provides guidance to scholars and managers alike.\n", "role": "assistant" } ]
A new taxonomy for comparing intrusion detection systems
[ "Data security", "Computer crime" ]
[ { "content": "Summarize the following paper into structured abstract.\n\nIntroduction: Intrusion detection has a long history, dating back to the work of Anderson (1980). Since then, various discrimination techniques, ranging from support vector machines, through to data mining and expert systems, have been used as part of the detection engine (Mukkamala and Sung, 2003). Many complete systems have been constructed and operated on live computer systems (see, for example, Allen et al., 2005). However, despite over 25 years of research, the topic is still active, in part due to the rapid development of information processing systems and the consequent discovery of new vulnerabilities, but also due to fundamental difficulties in achieving an accurate declaration of an intrusion. Intrusion systems are noted for high false alarm rates and considerable research effort is still concentrated on finding effective intrusion, non-intrusion discriminants.This paper proposes a new taxonomy, which aims to improve the comparison of intrusion systems. Taxonomies are an important aspect of the analysis of systems. They can act as a description, providing order to the subject and an explanation of observed phenomena. More importantly, they can provide insights through the identification of gaps. Such insights often identify new areas of research and this was the motivation for developing the taxonomy described in this paper.The proposed taxonomy considers the different type of outputs that can be produced by intrusion systems, along with the type of information used to determine the intrusion, as the basis for their comparison. A graphical combination of these parameters is proposed against which intrusion systems can be qualitatively and quantitatively compared.First, the background to the problem of comparing intrusion detection systems is presented in terms of other taxonomies. The difficulties of comparing an anomaly-based system with a network intrusion system using signatures are discussed. Then the new taxonomy is presented in terms of two parameters, namely the output type and the data scale over which they operate. The bulk of this paper addresses the application of the taxonomy by first introducing the concept of an intrusion matrix, to combine the two parameters, and then the concept of intrusion footprints to plot the capabilities of individual intrusion systems. The use of the taxonomy to compare different systems is then discussed before the potential for further theoretical analysis is described.\nBackground: A number of taxonomies for intrusion detection have already been proposed. One of the earliest was undertaken by Debar et al. (1999)) and classified intrusion systems according to their detection method, behaviour on detection, audit source location, or usage frequency. This was later extended to include the detection paradigm, as either state- or transition-based (Debar et al., 2000). Axelsson (2000) offered an alternative taxonomy in terms of the detection principle and operational aspects, such as whether operation is continuous or in batch mode. More recently taxonomies have been developed that classify intrusion systems according to the attack stage they can declare intrusions, such as pre-attack, real-time or post attack (Lukatsky, 2002).Each of these taxonomies provides insight into the operation of intrusion systems and is a useful framework for identifying new research opportunities. 
However, they are not a good basis for their comparison as they use the internal properties of such systems for classification. A taxonomy based on the applicability of intrusion systems is a more fundamental comparison approach as it describes their use, rather than the details of their implementation. Consider, for example, two network intrusion systems. System A is misuse-based whilst System B is anomaly-based. During a series of intrusion events both these systems will indicate the intrusion state of the network segment they are monitoring, possibly to different degrees of accuracy (that is, their detection and false alarm rates). Much work has been published on the quantitative comparison of such systems (see, for example, Abouzakhar and Manson, 2004), using analysis techniques such as receiver operating characteristic (ROC) or lift curves. However, in addition to the presence of an intrusion, System A will often indicate the type of attack and the exploit being used, on the basis of the specific signatures that are triggered. Comparing System A with System B via a ROC curve or confusion matrix will not include this important property of System A and thus is not a fair comparison method. Consider also the use of the output from these two systems. If both systems are providing alerts to network support staff, the actions that are likely to be taken are different. System A will identify the network peers involved in the suspected intrusion behaviour as well as the nature of the attack, allowing support staff to take specific action quickly. Support staff using System B may need to undertake further investigations before the information necessary to stop the intrusion behaviour is derived. In summary, each of these approaches to intrusion detection makes differing demands on the systems that use the information they provide and therefore comparison techniques should include this in their assessment.\nThe new intrusion taxonomy: The taxonomy proposed in this paper is inspired by the work of Johnson (1958) in the formation of images. He studied the ability of human operators to find and correctly classify objects within complex images. The objects were relatively small and thus the a priori probability that a specific area of the image contained an object was very low, a situation analogous to intrusion events within a background of normal network or host activity (Axelsson, 1999). Johnson defined the following types of operator tasks (amongst others):* Detection - the ability to say that something of interest is present in an image;* Recognition - the ability to determine the class of object present, such as a car or aircraft;* Identification - the ability to determine the type of object present, such as the make of car or the type of aircraft. The most important aspect of Johnson's work was the definition of minimum criteria necessary for successful completion of the above tasks. Using similar task definitions as a starting point, this study proposes to divide the output of intrusion systems into one of five categories, with the first three loosely in line with Johnson, as follows:1. Detection - in which the system outputs an indication of a state change within a network or host. There is no classification of the nature of the change, apart from the assumption that this indicates the occurrence of a possible intrusion. The principal use of such systems is for data rate reduction so that other systems (either automated or human) can investigate further.2. 
Recognition - in which the intrusion systems are capable of declaring the type of attack, such as Distributed Denial of Service (DDoS), reconnaissance, or User to Root (U2R).3. Identification - in which the system is capable of declaring the exploits used to achieve the intrusion, such as buffer overflow or an application-specific vulnerability.4. Confirmation - in which the attack plan is deduced, allowing attack-specific countermeasures to be deployed rather than coarse measures, such as disconnection of Internet access or isolation of key business servers.5. Prosecution - in which evidential-quality data is generated identifying the originator of the intrusion. As an example of the use of this taxonomy, consider a simple anomaly intrusion system comparing the utilised network bandwidth with historical values. Such a system would be categorised as an intrusion detection system. It would be able to declare that something unusual is happening within the network, but declaring that the anomaly was caused by an intruder is not likely to be achievable to an arbitrarily high accuracy. As another example, consider a Snort intrusion system operating on a single network segment (Roesch, 1999). When a rule is triggered and an alert declared, there is considerable attack-related information available. Often, rules are created to alert when the signatures of specific attacks are present. Thus, when such a rule has been triggered, the intrusion system can identify the exploit being used, as well as the network peers involved. Within the taxonomy proposed here Snort is acting as an intrusion identification system. In addition to considering the output from an intrusion system, further insight can be achieved from an analysis of the data scale over which the system is operating. In modern computer systems four data scales can be considered:1. File - monitoring the status of individual files for unauthorised access or change.2. Host - monitoring the applications running on and the behaviour of an individual host.3. Network - monitoring the packets exchanged between hosts, servers and other network devices to assert the presence of an intrusion.4. Enterprise - monitoring traffic originating from trusted sources of an organisation, which operate in the presence of other, less trusted data sources. The File, Host and Network data scales have been used in other studies (see, for example, Bace and Mell, 2001). The separation of the Network data scale into two sections, as introduced by this paper, is believed to be a novel concept. The principal difference between the Network and Enterprise data scales is the mixing of trusted and untrusted data streams within the same network segment. This is most often encountered in virtual private networks (VPN) between an office location of an organisation and its remote staff or trusted partners, via the Internet. VPNs are separated from the untrusted data streams using encryption schemes and well-known protocols. However, this separation may become subject to the same technology, policy or configuration vulnerabilities as other parts of the information processing system. Therefore, it is likely that security staff will want to know when their VPN communications are subject to intrusion attempts. Intrusion systems therefore need to extend their data scale applicability to include the Enterprise. This is a technically challenging problem.\nThe application of the taxonomy: This taxonomy can be applied in a number of ways. 
The remainder of this paper will examine its use to create an intrusion footprint on a grid or matrix formed from the output type and data scale elements of the taxonomy. The use of this footprint for comparison of systems will then be shown.\nIntrusion matrix\nRelationship with other definitions of intrusion: One of the earliest definitions of intrusion was from Amoroso. He defined intrusion detection as "the process of identifying and responding to malicious activity targeted at computing and networking resources" (Amoroso, 1998). In the same year Ptacek and Newsham (1998) defined intrusion as "unauthorized usage of or misuse of a computer system", whilst Alessandri et al. (2001) defined intrusion as "a malicious activity threatening the security policy that leads to a security failure, that is to a security policy violation". More recently many researchers have used the definition of Bace and Mell (2001) in which intrusion is defined as "attempts to compromise the confidentiality, integrity, availability, or to bypass the security mechanisms of a computer or network". For the remainder of this paper we will use a simple definition based on Alessandri, without the restriction of malicious intent. Therefore we consider intrusion to be defined as "an activity that leads to the violation of the security policy of a computer system". Further insight can be gained by considering the relationship of this definition with the definitions of detection, recognition and identification presented earlier. In this context, intrusion detection can be seen as the declaration that the security policy has been violated, but the specific clause that has been violated is not declared. Intrusion recognition systems are able to declare which clauses or subsets of clauses have been violated. Intrusion identification systems are able to declare which clauses or subsets have been violated, as well as declaring the way in which they have been violated. The above discussion can be used as the basis of a mathematical model of the intrusion declaration process, potentially allowing the theoretical limits to be determined in the same manner as Johnson's work for imaging systems. Also the inclusion of anti-virus systems (AVS) within this taxonomy opens the challenging and interesting option of building on the theoretical work already published in this area. The work of Cohen (1987) has already established theoretical limits on the detectability of viruses, proving that no algorithm can perfectly detect all possible viruses. More recently Li et al. (2005) have proposed a theoretical basis for intrusion, but this work has yet to reveal any useful conclusions. It is hoped that this taxonomy will build on this theoretical basis and lead to a better understanding of the limits of performance for intrusion systems, as well as providing an improved framework for their comparison.\nConclusions and further work: This paper proposed a novel taxonomy for intrusion systems, based on the type of information the system is capable of providing, as well as the data scale over which it operates. It allows a graphical comparison of systems to be undertaken, using their footprint on an intrusion matrix constructed from the output capabilities and data scale. A more precise, quantitative comparison can then be undertaken at overlapping elements within the footprint, using metrics specific to the type of output. 
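(Editorial illustration - a minimal sketch, not part of the original paper: the footprint-and-overlap comparison just described can be modelled by treating a footprint as a set of (output type, data scale) cells on the intrusion matrix; the two example systems below are hypothetical, loosely following the paper's bandwidth-anomaly and Snort examples.)

```python
# Editorial sketch: intrusion matrix footprints as sets of cells.
# The taxonomy's five output types and four data scales span the matrix.
OUTPUT_TYPES = ["Detection", "Recognition", "Identification",
                "Confirmation", "Prosecution"]
DATA_SCALES = ["File", "Host", "Network", "Enterprise"]

def footprint(outputs, scales):
    """The set of (output type, data scale) cells a system can cover."""
    return {(o, s) for o in outputs for s in scales}

# Hypothetical systems: a bandwidth anomaly detector (Detection only)
# and a signature-based system reaching Identification, both Network-scale.
system_a = footprint(["Detection"], ["Network"])
system_b = footprint(["Detection", "Recognition", "Identification"],
                     ["Network"])

# Qualitative comparison by area covered; quantitative comparison
# (e.g. ROC curves) is only meaningful on the overlapping cells.
print("Area A:", len(system_a), "Area B:", len(system_b))
print("Overlap:", sorted(system_a & system_b))
```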
Individual ROC curves can be produced and compared for the intrusion systems operating as detection, recognition or identification systems, depending on their overlap within the intrusion matrix. Further work is necessary to determine how to combine the many differing performance metrics that can be generated for this approach into a single definition of what constitutes the "best" intrusion system for a given problem. The challenging areas of the intrusion matrix have been identified as the Enterprise data scale (for all intrusion output capabilities) and the Prosecution output capability (for all data scales). Although careful design techniques can allow Prosecution systems to be proposed and constructed, the techniques available at the Enterprise data scale are sparse and present significant technical and legal difficulties. Finally, further work is still to be undertaken in establishing a useful theoretical framework for intrusion systems. A variety of definitions for intrusion are used by the many researchers active in this field. A logical start is therefore to agree on a single definition and build a theory around it. The expansion of what was thought of as "detection" into three separate and distinct activities presented in this paper may provide further impetus for this goal.\n", "role": "user" }, { "content": "Purpose: - The purpose of this paper is to propose a new taxonomy for intrusion detection systems as a way of generating further research topics focussed on improving intrusion system performance.\nDesign/methodology/approach: - The paper shows that intrusion systems are characterised by the type of output they are capable of producing, such as intrusion/non-intrusion declarations, through to intrusion plan determination. The output type is combined with the data scale used to undertake the intrusion determination, to produce a two-dimensional intrusion matrix.\nFindings: - The paper finds that different approaches to intrusion detection can produce different footprints on the intrusion matrix. Qualitative comparison of systems can be undertaken by examining the area covered within the footprint and the footprint overlap between systems. Quantitative comparison can be achieved in the areas of overlap.\nResearch limitations/implications: - The paper shows that the comparison of systems based on their footprint on the intrusion matrix may allow a deeper understanding of the limits of performance to be developed. The separation of what was previously understood as "detection" into the three areas of Detection, Recognition and Identification may provide further impetus for the development of a theoretical framework for intrusion systems.\nPractical implications: - The paper shows that the intrusion matrix can be divided into areas in which arbitrarily high performance is relatively easy to achieve. Other areas within the matrix, such as the Prosecution and Enterprise regions, present significant practical difficulties and therefore are opportunities for further research.\nOriginality/value: - The use of a taxonomy based on the type of output produced by an intrusion system is new to this paper, as is the combination with data scale to produce an intrusion matrix. The recognition that the network data scale should also be split to differentiate trusted and untrusted networks is new and presents challenging opportunities for further research topics.\n", "role": "assistant" } ]
Open source journals of library and information science: a bibliometric study
[ "Open access journals", "Bibliometrics", "Library & information science" ]
[ { "content": "Summarize the following paper into structured abstract.\n\n1. Introduction: Bibliometric analysis or citation analysis is an important area of study in the field of Library & Information Science. It has purposeful application in measuring the coverage and quality of various types of resources and thus helps in collection building policies of the libraries, more particularly in the selection process. \"Although the first recorded study on bibliometric topic was made in 1917 by Cole and Eales (1917) with the title \"Statistical analysis of literature of history of comparative anatomy\", \"Prtichard pioneered the term \"Bibliometrics\" in 1969 to mean \"the application of mathematics and statistical methods to books and other media of communications\" (Jena et al., 2012). Bibliometric techniques using references made to other documents can be applied to establish statistical models of scholarly communication flow. For example, citations can be used to map relationships between documents, between journals, or other channels of scholarly communications. In the opinion of Desai (2003), from citation analysis one can learn which scholars, from which disciplines, cite which articles; which journals are cited more often; and which disciplines cite the journals of other disciplines, The results of citation analysis are used for many purposes, for example, to determine the impact of specific articles or journals on subsequent research and to document the interdisciplinary applicability of various journals.\n2. Objectives of the study: Looking into this emerging significance of open source journals, and increased publication activity on open source journals in library & information science subjects, it was decided to carry out a bibliometric analysis of scientific output in this area. The major aims and objectives are:\n3. Literature review: Amwar and Seed (1999) in their study \"Pakistani librarians as authors: a bibliometric study of citations in LISA-PLUS\" have presented quantitative analysis of 251 items contributed by 64 Pakistani LIS professionals culled from LISA-PLUS. Volume of contribution, sources where published, periodic distribution, type of publication, and topical coverage, are dealt with in the analysis. They found that journal literature, which represents 90.4 percent of the citations, is published in 43 periodicals, most of which are from the US. However, more than half of the journal literature comes from one national journal. Gian Singh et al. (2007) in their study, analyzed 1,000 articles for the period 1998-2004, which were collected from LISA Plus. They found that:\n4. Methodology: The present study focused on the top ten LIS open source journals of the year 2011, i.e. the journals which are indexed in SCOPUS Social Science and Humanities journal list 2011, and which are only available online with open access. There are many prominent journals in LIS and its subfields; evaluating all of them was beyond the scope of this explorative study. This paper focuses on a subset of highly visible open source journals during 2011. While authors are giving more preference to commercial journals for their research publication due to its high impact factor, open source journals are still in demand within the research community. This paper helps to identify the preferences of open source journals among the LIS research community.\n5. Analysis and discussion: Annual distribution of papers\n6. 
Conclusion: The present study evaluated the top ten open access journals of Library & Information Science through bibliometric measures. The findings of the study indicate that a good number of papers have been published in these ten open access journals. The papers are mostly contributed by a single author. The degree of collaboration of authors also seems to be encouraging. The number of citations of these papers is also good. Since most of the contributors belong to the developed countries, it indicates that open access journals are yet to be popular in developing and underdeveloped countries. However, it is expected that the open access journals need to maintain their quality through proper editorial policies, and strive to solicit more quality papers, so that they will be able to cater to the needs of library & information science professionals, teachers, researchers and students and will enrich the library & information science literature.\n", "role": "user" }, { "content": "Purpose: The aim of this paper is to investigate the scholarly communications in open access journals of library & information science and to study the key dimensions of these publications.\nDesign/methodology/approach: For the analysis of the study, the top ten open access journals of 2011, which were indexed in SCOPUS, have been selected. The authors employed the necessary bibliometric measures to analyze different publication parameters.\nFindings: It is found that the contribution of articles in these top ten open access journals in 2011 is good, i.e. 37.3 percent. Single-authored papers are found to be the highest (40.48 percent), followed by two-authored and then three-authored papers. The degree of collaboration is found to be between 0.33 and 0.8. The number of citations used in the papers is also good (an average of 21.48 per paper). In regard to the ranking of country productivity, the USA topped the list.\nResearch limitations/implications: This paper focuses on the publication traits of the top ten open access journals of library & information science of the year 2011. A total of 373 papers from these ten journals are analyzed. Further studies can include more open access journals of this field for a period of more than one year.\nPractical implications: Scholars can benefit from insights into the scholarly contributions of the top ten open access journals of 2011 in the field of library & information science.\nOriginality/value: This paper provides valuable insights into the nature of academic publishing of open access journals of library & information science. It can help researchers, professionals, teachers and students to understand the top open access journals, valuable contributions, highly cited journals, country productivity, and other parameters.\n", "role": "assistant" } ]
Application of data mining for spare parts information in maintenance schedule: a case study
[ "Data mining", "Case study", "Maintenance management", "Association rule", "Sequential pattern" ]
[ { "content": "Summarize the following paper into structured abstract.\n\n1. Introduction: On many occasions, the sudden failure of one part of an equipment can lead to subsequent failures of other parts in the system. Ignoring this failure information may lead to an increase in downtime and the cost of next maintenance activity. One can prevent downtime by just replacing the failed part at the correct time. However, by acquiring this knowledge, a maintenance manager can take necessary actions to prevent the occurrence of similar issues in the future by maintaining a proper corrective maintenance schedule. It is observed that many industries face problems regarding the scheduling and predicting the sequence of maintenance activities along with the usage of parts during the corrective maintenance of their equipment. For example, in the case of open cast mining industry, heavy types of equipment such as bucket wheel excavator, spreader, tripper car, conveyors, etc., are employed in different areas of mining locations. Maintenance activities of this equipment in actual working locations are difficult in terms of predicting the sequence and requirement of different types of spare parts with their quantities. In such cases, the industry faces huge production downtime situations and also incurs an extra cost due to increased frequency of transporting spares from stores to maintenance locations. Therefore, there is a need to learn the sequence of maintenance activities performed on equipment and their approximate usage of spare parts. This process will reduce the downtime activities by knowing the occurrence of the next maintenance activity and the requirement of related spare parts. A data mining technique (sequential pattern mining technique) can be a better approach for extracting the sequences and predicting the number of spare parts to be used and this has motivated us to take up the present work.\n2. Related literature: In the maintenance management literature, several authors have applied data mining techniques for achieving various goals such as predicting maintenance intervals, fault detection, prioritizing equipment, predicting failure probability, identifying the cause of failure, fault trends, etc. In this brief review of the literature, initially, for the ease of the reader, a short discussion is made about itemset mining that is the basis for any data mining activities. In data mining, discovering frequent pattern or itemset is an important process for performing other activities such as association rule mining (ARM), classification, prediction, clustering, etc. (Koksal et al., 2011). The popular approaches such as Apriori-like algorithms (Agrawal et al., 1993) and pattern growth methods are often adopted for finding frequent itemsets from large databases. These approaches assume all items as binary variables considering whether they are consumed or bought or not. Agrawal and Srikant (1995) introduced three different Apriori algorithms like AprioriAll, AprioriSome and DynamicSome. They described a sequential pattern mining problem for finding the frequent sequences of items that should qualify user-specified minimum support. Later, a few researchers proposed an extension of itemset mining called itemset mining with quantities considering the consumption of item along with quantity (Srikant and Agrawal, 1996). Other researchers have studied weighted itemset mining considering the occurrence of the item along with its weight or importance (Yun and Leggett, 2005). 
Sun and Bai (2008) suggested a weighted mining approach without pre-assigned weights. Vo et al. (2013) proposed an algorithm based on WIT-trees for fast mining of frequent weighted itemsets. Lan et al. (2014) suggested an approach for finding weighted sequential patterns. Later on, a few more researchers proposed further algorithms for fast mining of weighted itemsets (Prost-Boucle et al., 2017; Dang and Skadron, 2017; Nguyen, 2018). The next process of data mining is ARM; it uses the frequent itemsets and discovers the associations among the items/itemsets. The significance and strength of an association rule are measured by rule support and confidence values that are greater than the threshold values. Next, a brief discussion is made about the literature related to the application of data mining in maintenance management.\n3. Background of the work: Here, a brief discussion is made about sequential pattern mining and sequential rules for ease of understanding of the work by the reader. A sequence database consists of sequences of ordered elements or events recorded with or without timestamp information. Let I = {I1, I2, I3, ..., Ip} be the set of all items; a sequence s is written as <e1, e2, e3, ..., en>, an ordered list of events. In the sequence s, e2 occurs after e1, e3 occurs after e2, and so on. One event can consist of one item or multiple items in the transaction. Considering the example given in Table I, if x=<a,(a, b, c)> and y=<(b, c)> are two sequences, then x is called a super-sequence of y. A sequence is called a sequential pattern or frequent sequential pattern when the computed support value of the sequence is greater than or equal to a threshold value. Referring to Han and Kamber (2006), a sequence database S is a set of tuples <SID, s>, where SID is the sequence id and s is the sequence. A tuple <SID, s> is said to contain a sequence a if a is a sub-sequence of s. The support value of the sequence a is given as follows: support(a) = |{<SID, s> : <SID, s> ∈ S and a is a sub-sequence of s}|, i.e. the number of tuples in S that contain a.\n4. Development of the proposed model: The objective of the proposed model is to integrate sequential maintenance patterns with their frequent group of spare parts. Here, we have described the framework of the proposed model: the collection of maintenance data, sequential pattern generation, frequent spares group generation, and the integration of sequential maintenance activities with associated spare parts. The framework of the model is given in Figure 1, and Figure 2 describes the time interval sequence of maintenance activities and the corresponding spare parts.\n5. Illustration of the model: an industrial case study: In this study, data were collected from a reputed public sector opencast mining company situated in the southern part of India. The company has implemented an integrated materials management system since 2004 by integrating indenting, supplier sourcing, purchasing and stores (inventory) modules. Hence, all the spare parts consumption information is stored in a central database. The company has three major mining units that use different types of special mining equipment such as bucket wheel excavators (28 no's), belt conveyors (12 no's), spreaders (16 no's), tripper cars (9 no's), etc. The company manages nearly 150,000 items and, among those, 105,217 are spare parts that are mostly consumed on mining and power generation equipment. 
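(Editorial illustration - a minimal sketch, not part of the original paper: the sub-sequence and support definitions from Section 3 above can be made concrete in a few lines of code; the toy sequence database mirrors the style of the Table I example but is otherwise invented.)

```python
# Editorial sketch of sub-sequence containment and support counting.
# A sequence is a list of events; each event is a set of items.
def contains(s, a):
    """True if sequence a is a sub-sequence of sequence s."""
    i = 0
    for event in s:
        if i < len(a) and a[i] <= event:  # a[i] is a subset of this event
            i += 1
    return i == len(a)

def support(db, a):
    """Number of tuples <SID, s> in database db whose sequence contains a."""
    return sum(1 for _, s in db if contains(s, a))

db = [  # invented toy sequence database of (SID, sequence) tuples
    (1, [{"a"}, {"a", "b", "c"}]),
    (2, [{"b", "c"}, {"d"}]),
    (3, [{"a"}, {"b", "c"}]),
]
y = [{"b", "c"}]
print(support(db, y))  # 3: every sequence in db contains <(b, c)>
```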
In our study, we have considered only one type of equipment, the belt conveyor, and the proposed model is applied to seven different maintenance activities of the belt conveyor that together consume around 2,643 spare parts in different mining locations. We collected the spare parts consumption history with respect to maintenance activities for conveyor 1800 in Mine-I. Table VI shows the seven maintenance activities and a few related spare parts used during each maintenance activity. We collected the maintenance data and spare parts consumption data from April 2006 to February 2012. We used binary transaction data for spare parts consumption, where "1" indicates that the spare part is consumed and "0" indicates that it is not consumed during the maintenance.\n6. Conclusion and further research: In this paper, we have made an attempt to suggest the list of sequential maintenance activities based on historical records of maintenance data. Our sequential mining approach gives a better way of analyzing the sequences as compared to the traditional approach of statistical analysis, particularly when the volume of maintenance data is large. Data mining uses complex SQL (Structured Query Language) queries that are executed in databases. This SQL execution takes much less time to extract useful information from a large volume of data as compared to traditional statistical methods. Usually, in many companies, maintenance managers suggest the maintenance activities on a rule-of-thumb basis or on the basis of their experience and physical monitoring of the equipment at regular intervals. The suggested approach tries to eliminate these manual efforts. One can generate a longer sequence that covers a larger number of maintenance activities, based on the threshold value. Moreover, the present work tried to map the frequent group of spare parts in each of the maintenance activities of the generated sequence, based on threshold support values set by decision makers. If the managers want to cover a larger number of spares, they can simply decrease the threshold values. This will help maintenance managers carry the spare parts directly to the maintenance location, which will decrease the number of trips from stores to the maintenance location and thereby reduce the transportation cost. The proposed method is dynamic in nature for suggesting maintenance activities as it is directly connected to the maintenance database. The generated sequential patterns may change periodically depending upon the actual changes or personalization in maintenance activities.\n", "role": "user" }, { "content": "Purpose: The purpose of this paper is to suggest a framework for extracting the sequential patterns of maintenance activities and related spare parts information from historical records of maintenance data with pre-defined support or threshold values.\nDesign/methodology/approach: A data mining approach has been adopted for predicting the maintenance activity along with spare parts. It starts with a collection of spare parts and maintenance data, then the development of sequential patterns, followed by the formation of frequent spare part groups, and finally, the integration of sequential maintenance activities with the associated spare parts.\nFindings: This study suggests a framework for extracting the sequential patterns of maintenance activities from historical records of maintenance data with pre-defined support or threshold values. 
A rule-based approach is proposed in this paper to predict the occurrence of the next maintenance activity along with information on spare parts consumption for that maintenance activity.\nResearch limitations/implications: The presented model can be extended for analyzing failure maintenance activities and performing root cause analysis, which can give more valuable suggestions to maintenance managers to take corrective actions prior to the next occurrence of failures. In addition, the timestamp information, which is ignored in this study, can be utilized to prioritize maintenance activities.\nPractical implications: The proposed model has a high potential for industrial applications and is validated through a case study. The study suggests that the model gives a better approach for selecting spare parts based on their similarity or correlation, considering their actual occurrence during maintenance activities. Apart from this, the clustering of spare parts also helps maintenance managers learn about the dependencies among spares for group stocking and maintaining parts availability during maintenance activities.\nOriginality/value: This study has used the technique of data mining to find dependent spare parts itemsets from the database of the company and has developed a model of the associated spare parts requirement for the subsequent maintenance activity.\n", "role": "assistant" } ]
Good organization development beats bad talent management - it's the system not just the person
[ "Human resources", "Learning and development", "People development", "Career development" ]
[ { "content": "Summarize the following paper into structured abstract.\n\n__NO_TITLE__: Clearly my title begs the question as to what is good OD and bad talent management. And it needs clarifying why I am concerned about it. I'll take a case study that must be known to all readers - the banking crisis of 2008 and use that to illustrate a few points.\n", "role": "user" }, { "content": "Purpose: The article aims to exposes error in practice in HR and learning and development (L&D), particularly the habit of following the crowd and the lack of company specific practice.\nDesign/methodology/approach: The article draws on a specific case of a highly successful company that did not follow the crowd.\nFindings: The case shows what can be done in practice.\nResearch limitations/implications: Research was based on one case but it exemplifies an important issue.\nPractical implications: The implications for practice are the need for organization-specific approaches to learning and development and the need to integrate individual development with organizational development.\nOriginality/value: The article is new in using a real, unpublished case to expose the errors in HR and L&D strategy and practice.\n", "role": "assistant" } ]
A comparative study of Lean implementation in higher and further education institutions in the UK
[ "Lean", "Higher education", "Further education" ]
[ { "content": "Summarize the following paper into structured abstract.\n\n1. Introduction: The higher education sector in the UK is changing rapidly. The current economic crisis as well as the change in the way Higher Education Institutions (HEIs) receive funding from their respective Higher Education Funding Councils has created a major shift in emphasis. Through changes in governmental policy relating to university fees, students now hold a major chunk of the income stream that HEIs so badly need. Therefore, in order to remain competitive in the face of greater competition for student places, HEIs are focused on providing more competitive offerings to their students. It is not likely that anyone can predict in what direction HE funding will go over the next few years (Barber et al., 2013) but one thing is certain, HEIs will need to do more with less, develop new teaching and learning strategies, differentiate by being distinct in the products and services it offers, offer a greater value adding proposition to the student and continue to be more \"customer\" focused. Therefore, in order to achieve the above aims, HEIs will need to become Leaner and more efficient in what they do and this will inevitably call on them to adopt improvement strategies such as Lean to assist in their efforts to achieve a more efficient and productive institution that is economically sound.\n2. Research method: The aim of this research programme is to initially identify the key management strategies employed within both FEIs and HEIs in order to determine the effective implementation of Lean in such institutions. Furthermore, the authors are particularly interested in understanding the issues around the organisational dynamics and the acquisition, integration and application of learning and knowledge about Lean implementation in order to sustain Lean implementation projects in HEIs.\n3. Research study - Stage 1: In order to ensure the workshops and groups were of a manageable size, the authors undertook two workshops and two FG studies with the participants and institutions listed in Table II. In Workshop 1 and FG 1, participants made up of FEI 1 and HEI 1 attended. Likewise, in Workshop 2 and FG2, participants from HEI 2 and FEI 2 attended. Selection of which HEI and FEIs to attend the workshops was undertaken randomly.\n4. Research study - Stage 2: The second stage of the study was to identify and compare the key Lean implementation programmes being adopted within FEIs and HEIs. In this study, the participants remained the same as in the first research study and each institution was visited by a member of the research team. The FGs were re-established and the participants interviewed as part of the FG. Semi-structured questionnaires were used and an unstructured interview approach was undertaken to where elaboration on a particular answer was sought. Alongside the interview/questionnaire format, the authors observed the Lean implementation plans and probed the FGs where possible in order to obtain further information about the initiatives being employed and their level of progress.\n5. 
Discussion and conclusions: Research study 1 outlined that HEIs were more likely to develop holistic strategies for implementing Lean due to them having improved infrastructural capacity (time, support, etc.). Research study 2, however, identified that FEIs had greater experience and knowledge of developing and implementing business process improvement programmes than HEIs, although a significant level of "project fatigue" had set in among the teams and the motivation towards future developments looked less obvious. Given the nature and positioning of FEIs, the opportunities to differentiate products and tackle international markets are also limited; therefore, it could be argued that product diversity and complexity are lower, thus enabling Lean initiatives to be implemented on a more stable product platform.\n", "role": "user" }, { "content": "Purpose: The purpose of this paper is to investigate and understand the differences that exist between educational institutions in the methods and practices employed in the development and implementation of Lean projects. Whilst many Higher Education Institutions (HEIs) are now starting their journey towards effectively implementing Lean, Further Education Institutions (FEIs) have trodden this well-worn path many years previously, and so the aim of this paper is to identify the key features FEIs have put in place to assist them in implementing Lean projects, and whether HEIs can learn from such institutions.\nDesign/methodology/approach: The paper applies two research methods in an attempt to understand the differences between the institutions and hence understand the key features that can be used to better implement Lean initiatives. First, through a series of focus groups, the authors employ a low-level form of Group Consensus Theory in an attempt to understand the organisational dynamics surrounding the adoption of Lean. This is used to understand whether Lean improvement teams within FEIs implement such initiatives with autonomy and support from senior management. Second, with the same group members, detailed interviews were held in order to identify further, more specific information about the tools and techniques employed in the implementation of Lean initiatives in both HEIs and FEIs.\nFindings: The study found that although FEIs had much more experience in the design, development and implementation of Lean initiatives, the organisational infrastructure and dynamics for driving Lean in FEIs were less well embedded into the culture of the respective institutions than first thought, and that Lean had initially been developed and driven by a consultancy-based approach and a tool-driven mentality. It was seen that whilst HEIs were generally slower in getting off the mark, there seemed to be more enthusiasm and willingness to drive such initiatives forward in a more systematic and holistic manner, even though some of the projects were in their very early stages of implementation.\nResearch limitations/implications: Whilst this work provides key information on how Lean initiatives are implemented across different institution types, the work has only looked at a very small sample of two teaching-focused HEIs and two FEIs. 
The work will need to be extended much more widely to incorporate a larger set of HEIs (both research- and teaching-focused) in order to provide a more complete map of Lean development in HEIs.\nPractical implications: The aim of the paper is to provide Lean project leaders in HEIs with some additional key insights into the cultural and organisational dynamics that exist in educational institutions other than HEIs in order to assist them in developing further, more comprehensive Lean programmes.\nOriginality/value: This paper is the first of its kind to study the organisational and cultural dynamics that exist between differing educational institutions in their approaches towards the implementation of Lean and business improvement programmes. The key features highlighted in this work raise important issues regarding the need and importance of developing team dynamics around project implementation.\n", "role": "assistant" } ]
Emotions in leaders' enactment of professional agency
[ "Emotions", "Leaders", "Agency" ]
[ { "content": "Summarize the following paper into structured abstract.\n\nIntroduction: In the field of workplace studies, emotions - including what they signify and how they are important - have gained considerable attention in recent decades (Ashkanasy, 2015; Cascon-Pereira and Hallier, 2012; Petitta and Naughton, 2015; Rausch et al., 2017). Given the ever-changing landscape of organisations, with concomitant challenges to organisational practices, individual employees and - very strikingly - leaders, it is no coincidence that concepts of emotions have emerged strongly. Demands for increased productivity and pressures arising from global competition have led organisations to constantly transform their practices. Increasingly, in seeking maximum profitability, organisations aim to exercise strong strategy-oriented control and demonstrate accountability with a focus on centrally imposed standards (Hansen and Jacobsen, 2016; Maroy, 2009). The recent rapid enactment of neoliberal economic policies and new public management (NPM) practices has led public sector organisations (e.g. in education and health care) to adopt new strategy-oriented managerial models (Halvarsson Lundgvist and Gustavsson, 2018; Hokka and Vahasantanen, 2014). This has exerted pressures to reduce the workforce, increase productivity and create new monitoring systems.\nProfessional agency in the work of leaders: In the field of workplace studies, the concept of professional agency is gaining increasing attention. Professional agency is seen as fundamental for influencing at work and for the learning that takes place through and for work (Billett, 2011; Goller and Paloniemi, 2017; Smith, 2017). In its most positive connotations, professional agency is seen as closely connected to creative work practices (Collin et al., 2017; Sawyer, 2012) or innovative work behaviour (Messmann and Mulder, 2017). Professional agency can also be seen as a collective construction that manifests as shared endeavours to develop and transform work practices or organisational structures (Hokka et al., 2017; Raelin, 2016).\nEmotions in leaders' work: Workplaces were once seen as (ideally) devoid of emotions, demanding an emotionally neutral performance from employees and especially from leaders. This paradigm was based on the assumption that human behaviour in organisational settings follows rational rules and is mainly guided by rational plans and calculations based on cognitive processes (Ashkanasy et al., 2000; Gabriel and Griffiths, 2002). However, recent research has shown that emotions are an extremely powerful force in workplace settings, organisational behaviour and leadership (Ashkanasy and Daus, 2002; Barsade et al., 2003; Thiel et al., 2012). Thus, it is now clear that emotions play an important role in decision-making and in leading processes (Barsade and Gibson, 2007; Loewenstein and Lerner, 2003) .\nResearch task: We sought to elaborate leaders' emotions (regarding their work and identity) in the context of a leadership coaching programme. Thus, we focused on leaders' accounts of their emotions in their enactment of professional agency. 
This led us to frame the following research task: What kinds of emotions did the middle management leaders express in relation to identity agency, relationship agency and organisation agency at the beginning and end of the identity coaching programme?\nMethods: Participants and research context\nFindings: After completing the first step of our analysis, we were slightly surprised at the intensity and number of emotional expressions in the interviews. The interviews were loaded with expressions of emotions connected to the leaders' work. Below, we present the kinds of emotional expressions articulated before and after the coaching programme. In presenting the findings, we focus on emotional expressions connected with identity agency, relationship agency and organisation agency. The most reported emotional expressions are presented by the total number of mentions for a given category, followed by the number of participants mentioning the emotion in question (separated by a slash [/]).\nDiscussion and conclusions: The findings of this study indicate that the work of leaders working in middle management is infused with contradictory emotions, which are closely related to their enacting of professional agency. Although based on a small-scale qualitative investigation, our findings suggest that in the work of leaders, the role of emotions in enacting professional agency is significant. When leaders were able to influence their work and thus practice professional agency, positive emotions, such as joy, emerged. Weak agency, for its part, seemed to be connected to negative emotions, such as inadequacy and exhaustion. The theoretical and practical implications of our findings are taken up below.\n", "role": "user" }, { "content": "Purpose: Although there has been an increase in workplace studies on professional agency, few of these have examined the role of emotions in the enactment of agency at work. To date, professional agency has been mainly conceptualised as a goal-oriented, rational activity aimed at influencing a current state of affairs. Challenged by this, this study aims to elaborate the nature and quality of emotions and how they might be connected to the enactment of professional agency.\nDesign/methodology/approach: Data are collected in the context of a leadership coaching programme that aimed to promote the leaders' professional agency over the course of a year. The participants (11 middle-management leaders working in university and hospital contexts) were interviewed before and after the programme, and the data were analysed using qualitative content analysis.\nFindings: Findings showed that emotions played an important role in the leaders' enactment of professional agency, as it pertained to their work and to their professional identity. The study suggests that enacting professional agency is by no means a matter of purely rational actions.\nPractical implications: The study suggests that emotional agency can be learned and enhanced through group-based interventions reflecting on and processing one's own professional roles and work.\nOriginality/value: As a theoretical conclusion, the study argues that professional agency should be reconceptualised in such a way as to acknowledge the importance of emotions (one's own and those of one's fellow workers) in practising agency within organisational contexts.\n", "role": "assistant" } ]
Impact of statutory regulation and key internal factors on firm's corporate social responsibility: An emerging economy perspective
[ "Banking industry", "Corporate social responsibility", "Internal factors", "Regulatory impact", "M14" ]
[ { "content": "Summarize the following paper into structured abstract.\n\n1. Introduction: Corporate social responsibility (CSR) is getting more powerful having a crucial impact on the decision-making of external shareholders regarding a firm. Investors are now more concerned about firm's social behavior through corporate social reporting while making their investment decision (Aguilera et al., 2006). Though a substantial number of firms are found to provide disclosures about their corporate social performances, there is variation in terms of quality (Habek and Wolniak, 2015). Different pressure groups are emerging who demand that firms will be responsible for the contracts they have on social, political and ecological environments (Azim et al., 2009). The level of CSR expenditure and performance varies firm to firm. In a developing country like Bangladesh substantial variation of CSR performance and disclosure is noticeable among firms and it creates a question that which factors are responsible for such variation? In 2011, a notable attempt was taken by the issuance of a statutory regulatory order (SRO) from National Board of Revenue (NBR), Bangladesh where a tax exemption was provided on the CSR expenditure made by the banking firms. No previous research has identified such exploration of regulatory impact on the CSR practices of banking industries in Bangladesh. This has been crucial to investigate because the acceptance and implementation of laws and regulations by firms are key rather just passing out. Khan et al. (2012) mention that there was an SRO issued by the govt. regarding tax rebate on the actual amount spent on CSR activities by the firms in Bangladesh, but did not examine the impact of such regulation on the degree of CSR expenditures and disclosures. This study has made an attempt to contribute through a striking investigation of regulatory impact on the CSR performance and disclosure level.\n2. Review of literature: CSR reporting is subject to different factors that vary in different settings. Imam (2000) points out that CSR practices vary from developed countries to developing countries. However, most of the CSR studies are conducted on the developed country perspectives (Gray et al., 2001; Deegan, 2002; Paul et al., 2006). The study of Khan et al. (2012) has noted that the paucity of CSR literature in the context of emerging economies is one of the important research gaps. Industry variation is another factor that causes a difference in corporate social reporting among different industries (Gray et al., 2001). Esa and Zahari (2015) noted that it is the increased awareness about corporate accountability, which contributes to the rise of interest in corporate social disclosures. Disclosure of non-financial information reduces information asymmetry, which fosters firm performance (Bose et al., 2017). Different internal factors related to corporate governance and financial performances of firms are needed to be examined in the contest of social reporting particularly in developing country perspective. Das et al. (2015) has found a gap in both corporate governance and CSR literature because of the paucity of research works in emerging economies. 
One important difference between developed and developing countries is that in developed countries stakeholder groups are strong enough to create pressure on firms to disclose more non-financial information (social, environmental, sustainability, among others), whereas such pressure is almost absent in developing countries. Existing research studies are mainly concentrated on how CSR committees influence CSR disclosures (Kolk and Pinkse, 2010). Other firm characteristics and governance factors, such as firm size, board size and firm network, may have an instrumental impact on the extent of CSR performance and disclosure. Belal and Owen (2007) have identified stakeholder pressure as an important issue for corporate social reporting. There are a number of internal factors related to corporate governance, firm characteristics and financial performance. Identifying the factors that drive firms to disclose more social information is a real contribution to the CSR literature (Owen, 2005). A significant number of studies have investigated the relationship between corporate size and CSR disclosures, and most have found a significant positive relationship between firm size and CSR disclosure level (Naser and Hassan, 2013). Webb (2004) states that, from a resource dependence theory perspective, independent board members can best protect shareholders' interests and corporate image. Family ownership has been identified as a catalyst for poor CSR disclosure in Bangladesh (Belal and Owen, 2007).\n3. Corporate social responsibility practices in Bangladesh: Bangladesh is one of the most rapidly emerging economies in the world, where non-financial reporting such as corporate social reporting is also going through an important phase. There is an increasing expectation from different stakeholders that firms should be responsible and accountable for the impacts they have on society and the environment (Azim et al., 2009). After the independence of Bangladesh, there was a lack of regulations regarding non-financial reporting such as CSR reporting. Family dominance on firm boards is very common in Bangladesh, which has resulted in poor voluntary disclosure practice. Choudhury (2008) has noted that many firms have family dominance and informal control mechanisms. On the other hand, the Companies Act 1994 (Government of Bangladesh, 1994) does not contain any provision regarding corporate social reporting. Examining the annual reports of 40 firms listed on the Dhaka Stock Exchange (DSE) for the period 1996-1997, Imam (2000) found a very poor level of disclosures related to community, environment and customers, which was not adequate to discharge social responsibilities. Hossain et al. (2006) have also found that on average only 8.33 per cent of Bangladeshi firms provide social and environmental disclosures in annual reports. One important dimension of corporate social reporting in Bangladesh is the lack of structured and standalone corporate social reporting. Most companies present their social and environmental disclosures either in separate segments or in the directors' report. The quality of disclosed information is very low, and most disclosures are descriptive in nature (Belal, 2001). Imam (2000) has also argued that the absence of an independent audit has raised a big question about the credibility of disclosed information in social reporting. 
In most studies, annual reports have been used as the primary source for analyzing the extent of corporate social reporting. Belal and Owen (2007), in an interview-based study of 21 managers in Bangladesh, revealed that the key motivation behind corporate social reporting in Bangladesh is to satisfy principal stakeholder groups. Another notable dimension of corporate social reporting in Bangladesh is that most disclosures cover philanthropic activities performed by the firms, and few qualitative disclosures are found that focus on environment and ecology. Belal and Cooper (2011) mention that many companies in Bangladesh stayed away from sensitive social responsibility disclosure issues such as child labor, equal opportunities and poverty alleviation. Pressure from stakeholder groups is influential in upholding the extent of CSR activities and disclosures (Islam and Deegan, 2008).\n4. Development of hypothesis: The study is based on several hypotheses concerning the relationship between a firm's internal factors and its level of CSR performance and disclosure. A few variables have been taken into consideration which are expected to have a significant bearing on the research objectives and which serve as catalysts in forming the hypotheses.\n5. Research methodology: 5.1 Sample and data\n6. Empirical results: 6.1 Descriptive statistics of dependent variable\n7. Test of hypothesis: To test the hypotheses formulated in this study, multivariate regression analysis has been conducted, as shown in Table VI. In the regression model, the dependent variable is CSRDI.\n8. Test of statutory regulatory impact on corporate social responsibility in Bangladesh: To examine the impact of the SRO granting a tax exemption on CSR expenditure, difference-in-difference (DiD) analysis has been used. The years 2011 and 2012 have been included in the \"before\" period and 2013, 2014 and 2015 in the \"after\" period. This particular regulatory order includes some criteria for banking firms to be eligible for the tax exemption; several firms have been eligible for such exemption under this SRO. These firms have been included in the \"treated\" group, and the rest of the firms, which are not affected by the SRO, have been included in the \"control\" group. Following is the DiD model set for this study:\n9. Conclusion: The study has attempted to investigate the relationship and driving impact of a firm's internal factors, including some key governance issues, on the extent of CSR performance and reporting disclosures in the banking sector of an emerging economy, Bangladesh. In addition, this study, as a maiden attempt, has investigated the impact of the SRO issued by the NBR, Bangladesh in 2011 relating to tax exemption on CSR expenditure. The DiD analysis shows that banking firms failed to take advantage of such exemption and that no significant development in social disclosure happened between 2011 and 2015. The multivariate regression results confirm that board meetings and board expertise have a significant positive impact on firms' corporate social reporting. This result is consistent with a number of studies (Alotaibi and Hussainy, 2016; Benomran et al., 2015; Darus et al., 2014; Shamil et al., 2014). A negative association between firm size and corporate social reporting has been confirmed by the regression results. This result is consistent with Roberts (1992), where a similar negative association between firm size and social disclosure level was found. 
Interestingly, no significant relationship has been found between firm network and CSR disclosures, which indicates that a larger operating network does not guarantee satisfactory CSR performance and disclosures. Different dynamics of firms' internal factors are taken into consideration in this study, which act as important catalysts in determining the degree and quality of CSR performance and reporting, particularly in the banking industry. Hence, the study is expected to have crucial implications for the relevant literature, as well as for regulators and practitioners.\n", "role": "user" }, { "content": "Purpose: This study aims to examine the impact of a statutory regulatory order issued by the government on the degree of corporate social responsibility (CSR) performance and disclosures. It also aims to empirically investigate the relationship of a firm's key internal and governance factors with CSR performance and disclosures.\nDesign/methodology/approach: The study is based on empirical data from all banking firms listed on the Dhaka Stock Exchange (DSE) for the period 2011-2015. Difference-in-difference analysis has been used to test the regulatory impact, with content analysis performed to derive CSR disclosure scores. Multivariate regression analysis has been used to test hypotheses about the impact of firms' internal factors on CSR disclosures.\nFindings: The analysis and results of the study show that the statutory regulatory order has no significant impact on a firm's level of CSR performance and disclosure. On the other hand, the study has found that board expertise and board meetings have a significant positive impact on firms' CSR, while firm network is found to have no significant influence on firms' CSR disclosures.\nResearch limitations/implications: The main limitation of the study is that it covers only the listed firms of the banking industry in Bangladesh. Because of data inconsistency, other industries are not included in the data sample of this study.\nOriginality/value: The value of the study lies in its contribution to the empirical investigation of regulatory impact and key internal and governance factors from a developing country perspective, which will add value to the CSR literature.\n", "role": "assistant" } ]
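A note on the DiD model referenced in the entry above: the excerpt announces the specification ("Following is the DiD model set for this study:") without reproducing it. As a hedged sketch only, not the authors' own equation, a canonical two-group, two-period DiD specification consistent with the description (treated vs. control banks, before vs. after the 2011 SRO) would be:

    CSRDI_{it} = \beta_0 + \beta_1\,\mathit{Treated}_i + \beta_2\,\mathit{Post}_t + \beta_3\,(\mathit{Treated}_i \times \mathit{Post}_t) + \gamma' X_{it} + \varepsilon_{it}

Here Treated_i marks banks eligible for the exemption, Post_t equals 1 for 2013-2015, X_{it} collects hypothetical firm-level controls, and beta_3 is the DiD estimate of the regulatory effect; the reported finding of no significant impact corresponds to beta_3 being statistically indistinguishable from zero.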
Effects of benevolence, integrity, and ability on trust-in-supervisor
[ "Ability", "Benevolence", "Integrity", "Trustworthiness", "Trust‐in‐supervisor", "Malaysia" ]
[ { "content": "Summarize the following paper into structured abstract.\n\nIntroduction: Humans are social beings who are frequently involved in social exchange relationships. In order for such relationships to work well, people need to trust one another; hence, trust is a fundamental aspect of social life (Igarashi et al., 2008). This is no different in the workplace. Interpersonal trust is a vital component of effective working relationships (Ferris et al., 2009; Gill et al., 2005) and has been found to be related to affective organizational commitment (e.g. Poon et al., 2006), willingness to help co-workers (e.g. Poon, 2006), perception of workgroup cohesion (Thau et al., 2007, Study 1), willingness to share resources (Dirks and Skarlicki, 2009, Study 2), intentions to quit (Brower et al., 2009), and various facets of job performance including task performance, contextual performance, and workplace deviance (e.g. Atkinson, 2007; Colquitt et al., 2007; Dirks and Ferrin, 2002). Thus, in the absence of external control mechanisms, there are various practical benefits to fostering trust in the workplace, especially employees' trust toward their immediate supervisor. Supervisors have considerable influence on employee attitudes and behaviors; therefore, there is a need to understand why some supervisors are trusted and others less so.To address the above issue, I drew on the conceptual framework of Mayer et al. (1995) to test a model of trust-in-supervisor that examined trustee attributes (i.e. benevolence, integrity, and ability) as predictors of trust-in-supervisor. To date, although there is research evidence that benevolence, integrity, and ability have statistically significant, unique effects on trust (see Colquitt et al., 2007 for a meta-analytic review), it is uncertain how these three variables interact to predict trust. Given that Mayer et al. (1995) did suggest that benevolence, integrity, and ability might work interactively to predict trust and the recent call by scholars for research that examines whether or not all three variables are necessary for forming trust (cf. Burke et al., 2007) and how various trustworthiness variables work in conjunction with each other (cf. Dirks and Skarlicki, 2009), my primary objective in this study was to examine whether or not the effects of benevolence, integrity, and ability on trust might also be interactive. A secondary objective was to replicate past studies in examining the additive effects of these three variables on trust-in-supervisor. Also, given that most studies on interpersonal trust within organizations have been conducted in western settings, another secondary contribution of this study is the extension of trust research to a non-western setting. It is hoped that findings from such research will add to the empirical base of research findings on interpersonal trust.\nConceptual background and hypotheses: Additive effects of trustworthiness attributes on trust-in-supervisor\nMethod: Sample and procedure\nResults and discussion: Results of analyses\n", "role": "user" }, { "content": "Purpose: - This study aimed to examine the predictive effects of trustworthiness attributes (i.e. benevolence, integrity, and ability) on trust-in-supervisor.\nDesign/methodology/approach: - A field survey using a structured questionnaire was used to gather data from 107 white-collar employees from diverse organizations in Malaysia. 
The data were analysed using hierarchical multiple regression analysis.\nFindings: - The results showed that perceptions of supervisor benevolence, integrity, and ability predicted trust-in-supervisor both directly and interactively. Further analysis revealed that integrity and ability interacted in a compensatory manner to predict trust-in-supervisor when benevolence was high but not when it was low.\nResearch limitations/implications: - Study limitations include the use of self-report cross-sectional data. The findings underscore the importance of looking beyond statistical models that test only for main and two-way interaction effects in research examining trustworthiness attributes. Researchers should consider examining three-way interaction effects or run the risk of having a misspecified model. Also, research to determine the relative importance of trustworthiness attributes and the conditions under which one attribute is given more weight than another is needed.\nPractical implications: - Supervisors should be made aware of the importance of treating their subordinates with benevolence. Nevertheless, because benevolence is a necessary but insufficient condition for fostering trust, employers must ensure that their supervisors have high integrity and ability or, at the very least, one of these attributes.\nOriginality/value: - This study highlighted the importance of examining higher order effects in research examining trustworthiness attributes and provides what is perhaps the first empirical test of how benevolence, integrity, and ability interact to predict trust-in-supervisor.\n", "role": "assistant" } ]
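The hierarchical moderated regression described in the entry above follows a standard pattern: enter main effects, then two-way products, then the three-way product, and test the increment in explained variance at each step. A minimal sketch in Python (not the study's code; the file and column names are hypothetical) might look like this:

    import pandas as pd
    import statsmodels.formula.api as smf

    # Hypothetical data: one row per respondent, with supervisor benevolence (B),
    # integrity (I), ability (A) and trust-in-supervisor (trust) scale scores.
    df = pd.read_csv("trust_survey.csv")
    for col in ["B", "I", "A"]:
        df[col] = df[col] - df[col].mean()  # mean-center predictors before forming products

    m1 = smf.ols("trust ~ B + I + A", data=df).fit()                    # step 1: additive effects
    m2 = smf.ols("trust ~ B + I + A + B:I + B:A + I:A", data=df).fit()  # step 2: two-way terms
    m3 = smf.ols("trust ~ B * I * A", data=df).fit()                    # step 3: adds B:I:A

    # The R-squared increment from m2 to m3 tests the three-way interaction.
    print(m2.rsquared, m3.rsquared, m3.params["B:I:A"], m3.pvalues["B:I:A"])

A significant B:I:A coefficient would then be probed with simple slopes at high and low benevolence, mirroring the compensatory integrity-ability pattern reported in the findings.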
Understanding the relationship among resources, social media use and hotel performance: The case of Twitter use by hotels
[ "Hotel performance", "Resource-based view", "Twitter", "Social media use" ]
[ { "content": "Summarize the following paper into structured abstract.\n\nIntroduction: Social media refers to a broad range of internet applications, including online review sites (e.g. TripAdvisor), social networking sites (SNSs) (e.g. Twitter) and content sharing platforms (e.g. YouTube) (Kaplan and Haenlein, 2010). In recent years, social media, particularly SNSs, has expanded rapidly in industries. Over 90 per cent of marketers have used such social media platforms for their businesses (Bennett, 2013). According to an industry report, almost 80 per cent of Fortune 500 companies employ Twitter and Facebook (Barnes and Lescault, 2014). A recent Chief Marketing Officers' (CMO) survey shows that companies are increasing their expenditures on SNS such as Twitter and Facebook by investing in technology and infrastructure (Tadena, 2014). Facebook reached 1.65 billion monthly active users (Facebook, 2016), while Twitter claims to have over 300 million monthly active users (Twitter, 2016).\nLiterature review: Theoretical background\nMethodology: Sample and data collection\nResults: Relationship between hotel's resources and social media use\nDiscussion and conclusion: Conclusions\n", "role": "user" }, { "content": "Purpose: The purpose of this study is to understand the use of social networking sites (SNSs) by hotels. Specifically, drawn upon a resource and capability-based perspective, this study addresses two research questions: (1) the relationship between a hotel's resources and its use of Twitter and (2) the relationship between the use of Twitter by hotels and their RevPAR.\nDesign/methodology/approach: The research data include the hotel chain scales, Twitter user profiles and Twitter activities of the hotel parent companies in the USA and the hotels' RevPAR. To more clearly understand the effect of the use of SNSs, the study uses two dimensions: electronic word-of-mouth and customer engagement. The two dimensions of the hotels' Twitter use are calculated based on the data extracted from their Twitter user profiles and historical tweets. For a practical purpose, a social media index (SMI), which combines electronic word-of-mouth and the customer engagement score, was used to determine the overall level of Twitter use by hotels.\nFindings: For RQ1, the results indicate there is a positive association between a hotel's resources and Twitter use. For RQ2, this study shows there is also a positive association between Twitter use by hotels and their RevPAR.\nPractical implications: Twitter use appears to be associated with hotels' resources. In turn, Twitter use is positively associated with hotel RevPAR. Thus, hotels should look at Twitter as a potential strategic tool for business operation and attempt to increase their ability to leverage Twitter (and other SNSs) for organizational goals (e.g. sales, promotion, customer service).\nOriginality/value: To the authors' knowledge, this is the first study empirically investigating the use of SNSs by hotels with the data drawn from actual firm-generated content (e.g. tweets, retweets) and hotels' user profile information from Twitter.\n", "role": "assistant" } ]
Driving organisational change with internal coaching programmes: part two
[ "Coaching", "Organizational change", "Return on investment", "Case studies" ]
[ { "content": "Summarize the following paper into structured abstract.\n\nPart two: Part one of this two-part paper (published in Industrial and Commercial Training, Volume 40 No 1, 2008) focused on organisational change and using coaches as change agents. It compared the use of external versus internal coaches including the areas of cost, complexity, retention, scale and breadth of impact as well as impact on the coaches. It also covered ways of using internal coaching, from overcoming resistance and engaging leaders to where to focus the coaching.Part two of this paper covers training of internal coaches including learning approaches and standards. It covers modelling and design issues, such as scaling internal coaching, coaching culture, using managers as coaches and design around use of external coaches. Implementation of the coaching initiative is also approached with a look at buy in, developing experts and ways to measure the coaching. The paper ends with some common questions and possibilities for future development.\n1. Internal coaching training, design and implementation: 1.1. Training internal coaches\n2. Conclusion: Internal coaching works. It is becoming the initiative of choice in organisations undergoing change, and looking to embed a coaching culture. Of importance is how the internal coaching initiative is designed, packaged and implemented, ensuring alignment with the organisation's vision and longer term strategy.Thanks to measured initiatives, research results show that internal coaching has a significant impact on the coaches, clients, direct reports and the organisation. Factors of note include positive business impacts such as improved retention, engagement, productivity and performance and measurements such as ROI, e.g. ROI x 17 in the recent AIG case (see Figure 4).The field is wide open to further research.\n", "role": "user" }, { "content": "Purpose: - The purpose of this two-part paper is to outline a new way of utilising coaching to drive wide scale organisational change.\nDesign/methodology/approach: - The various benefits to using internal coaches are illustrated by fleshing out the cost benefits, and highlighting a range of ways that organisations can integrate coaching into their people, performance and culture frameworks. The paper covers training, design and implementation issues as well as looking at aligning coaching models with the organisation's strategic approach; draws on extensive case studies and posits some best practice principles, then addresses some of the key questions around this topic. This paper also draws on interviews with over 50 internal coaches over two years, and several other research papers on this topic.\nFindings: - The paper illustrates how training leaders to be internal coaches is a more scalable, sustainable and robust approach to driving change and improving performance than hiring external coaches. Early indicators are showing significant increases in retention, engagement, productivity and performance, as well as ROI (17X), across organisations that have developed internal coaching.\nPractical implications: - Data are critical to understanding coaching impacts within organisational contexts. Thanks to those organisations using measurement, impacts to the business can be determined.\nOriginality/value: - These positive data are significant for organisations making decisions around introducing coaching initiatives, driving organisational change or adopting a coaching culture.\n", "role": "assistant" } ]
Assessing the prudential approach of the Financial Services Commission towards corporate failure in Mauritius
[ "Mauritius", "Corporate failure", "Financial regulations", "Financial services commission", "Regulatory bodies" ]
[ { "content": "Summarize the following paper into structured abstract.\n\n1. Introduction: When Mauritius earmarked the Financial Services Sector as being a future auspicious and promising zone for further economic development, the insistent urge to amplify, strengthen and modernize this particular sector reached its peak.\n2. Delineation of corporate failure: The foundation of every business is laid with ample optimism and confidence that the venture will be an unreservedly success story. Besides, for decades now, once a business gets aboard to accomplish its aims and objectives, the \"putative criterion\" by which it is judged is how it has accelerated towards its expansion, how speedily it has taken a firm grip on the share market or how much profit it has yielded since its inception. Such thinking and behaviour are irrefutably coherent as the blossoming of the firm should undeniably be the leading focus. However, it would be utterly irrational to lay such an unrefined emphasis on the growth of the business that completely dismisses \"every thought of failure and most essentially the devastating aftermaths of failure\". Hence even if failure might be the last thing on the mind of those embarking on a business activity, it should be accepted that words such as \"collapsed, failed, bankrupt, broke and bust\" despite being a hard-hitting reality for companies, are inherently part and parcel of any business venture (Argenti, 1967). In general, the disheartening ordeal that companies go through when their business is not living up to their expectations is labelled as corporate failure (Sull, 1999).\n3. Can corporate failure be prevented?: Corporate collapse has never been a matter of merriment. It brings with itself only devastation in the form of rising unemployment, degradation of living standards, degrading economic growth and huge losses plunging the entire community in despair. Therefore, it is ultimately vital to look into the matter. It is more appropriate to enhance the regulations that exist rather than finding solutions once the damage is done. Coming up to the case of Mauritius, where the financial and business services sector has emerged as the fourth pillar of the economy as can be seen below, failure in this sector simply cannot be afforded. In addition, the upshots of the World Economic Crisis can also be termed as a catalyst to review the existing regulatory bodies acting as a shield against corporate failure for this sector so as not to become a prey of the dire consequences of corporate failure. At present, Mauritius has two regulatory bodies specifically \"the Financial Services Commission and the Bank of Mauritius\" present to regulate and supervise the banking and non-banking financial sectors and hence to mitigate the risks of failure to the exception of commercial companies which are not regulated by such bodies so far (Ministry of Economic Development, Financial Services and Corporate Affairs, 2001). The paper focuses on the FSC instead of the Bank of Mauritius since episodes of banking failure have been an extremely rare case in Mauritius. The aim is to assess the proficiency of the FSC as a body that is capable of steering the insurance sector clear of any possible failure.\n4. A critical analysis of the efficiency of regulations of the Financial Services Commission: 4.1 Capital adequacy and solvency margin requirements - in comparison with the PRA\n5. 
Recommendations: In light of the critical analysis of the FSC's approach to taking prudential measures against corporate collapse in the insurance sector under its purview, this part of the paper sets out some plausible recommendations to be considered so as to boost the effectiveness of the FSC in fulfilling its precautionary objective.\n6. Conclusion: In this paper, light has been shed on how overlooking the aspect of failure can lead to distressing penalties and plunge businesses into despair, and hence how it is fundamentally important to place equivalent focus on precautionary measures against failure alongside other objectives. It has equally been observed that taking a step towards the prevention of failure requires the utmost contribution and adeptness of regulatory bodies, as they are the actual watchdog over the conduct of businesses; therefore, in the context of Mauritius, considering how the insurance sector witnesses failures, this prompts a gauging of the FSC's current proficiency in the sector under its purview to get a clearer picture of the reality of things.\n", "role": "user" }, { "content": "Purpose: This paper aims to provide a comprehensive overview of the sphere of corporate failure in Mauritius. The causes are explained and the urge to take preventive measures is justified therein. Recommendations are finally proposed to prevent corporate failure in Mauritius.\nDesign/methodology/approach: The methodology used is based on a mixture of the legal research method and case study analysis. This paper analyses every legal instrument, such as enactments, binding rules, regulations and guidelines, relevant to the Financial Services Commission (FSC) and the insurance sector in Mauritius.\nFindings: The prudential approach of the FSC is the basis of any attempt at preventing corporate failure in Mauritius. However, there is still room for improvement through amendments that can be brought at various stages, such as the licensing, compliance and regulation stages.\nResearch limitations/implications: In terms of research limitations, this is an area that is quite new in Mauritius, implying that the literature would mostly be indirect in nature. However, the study has high implications, as it positions itself as one of the first pieces of literature on the issue of corporate failure in Mauritius. It can be the beginning of a long and much-needed series of literature in the field.\nPractical implications: The effectiveness of the regulatory power of the FSC is essential for the future of the financial sector of Mauritius. The amendments that are proposed throughout this study would help to immediately improve the health of this essential sector.\nSocial implications: The paper posits the business world as an area in which the social impacts are significant. The social implications extend to researchers, students, practitioners and policymakers. Also, it is a piece of research that would be important for investors who want to invest in the financial sector in Mauritius.\nOriginality/value: This paper will be highly instrumental to policymakers, regulatory authorities, international investors and local businessmen wishing to enter the financial services sector, giving them a better idea of how this very important pillar of the economy of Mauritius can be better shielded against failure and how it can be enhanced to promote the economic growth of Mauritius.\n", "role": "assistant" } ]
Big data integration with business processes: a literature review
[ "Research methodology", "Network analysis", "Bibliometric analysis", "Big data analytics" ]
[ { "content": "Summarize the following paper into structured abstract.\n\n1. Introduction: The area of big data has received increasing attention from both the academic and the business communities over the past few decades. It helps to gain business insights, competitive advantage and transforms entire business processes (Wong, 2012; Oh et al., 2012; Mishra, Gunasekaran, Papadopoulos and Childe, 2016). It can \"create significant value for the world economy\", which \"enhances the productivity and competitiveness of companies and the public sector and creates a substantial economic surplus for consumers\" (Manyika et al., 2011, p. 1). Several objectives can be fulfilled by analysing big data, so there is a need for analytical techniques (big data analytics or BDA) to deal with large and complex data sets. Some organizations view BDA as a tool that can help in making strategic decisions, while scholars use it as a basis for verifying existing models and theories (Muhtaroglu et al., 2013). Organizations can empower their customers and improve decisions if they harness the power of BDA effectively (Miller, 2013). By recognizing BDA's potential, organizations improve the efficiency and quality of their business processes through effective business process management (BPM). BPM not only improves processes but also monitors the technological advances that can be integrated in the development of efficient processes through business process re-engineering (BPR) and business process innovation (BPI) (Anand et al., 2013). Thus, the successful integration of BDA and business processes may create a \"new class of economic asset\" and help the top-performing organizations redefine their business and outperform their competitors.\n2. Methodology: This study is a bibliometric and network analysis review, that is, we document all the available studies relevant to a current area or a specific research question (Fahimnia et al., 2015). We determined on this methodology for a number of reasons: to identify the top contributing authors, organizations and countries related to the field; to compare citation and PageRank analysis; and to uncover current research gaps through data clustering. To achieve our research objectives, we took a five-step approach, outlined in detail below: develop a review protocol; identify inclusion and exclusion criteria; explain the search strategy process; study the selection process; and use data extraction and analysis.\n3. Bibliometric analysis: In this section, we provide statistics about the 49 shortlisted articles. Specifically, we studied these articles in terms of their authors, keywords, affiliations and funding agencies. We conducted bibliometric analysis using BibExcel as it is highly flexible and capable of dealing with large volumes of data; it is also compatible with other applications such as Excel, Pajek and Gephi (Persson et al., 2009; Paloviita, 2009). An additional merit of BibExcel is that it generates data for future network analysis, which is not possible with other software like HistCite or Publish or Perish.\n4. Network analysis: The literature records the use of various tools, such as Pajek, VOSviewer, HistCite Graph Maker, and Gephi, to perform network analysis. We chose Gephi for this study as it can handle various data formats and complex data sets and generate flexible, insightful visual aids. In Gephi, the published articles act as nodes and citations as arcs or edges. 
To generate graphs in Gephi, a .net (Pajek-format) network file is needed and can be created using BibExcel.\n5. Discussion and conclusions: In this study, we present an overview of the distribution of publications on big data and business processes by conducting a bibliometric and network analysis review of articles written during the period 2006-2016. To extract relevant studies, we searched for papers in the WoS database using predefined keywords. We screened papers by analysing their titles and abstracts and removed those that violated the inclusion criteria. To provide an overview of big data and business process status, we identified a primary set of 49 articles. The results of our study identify the key contributing authors, countries, affiliations and keywords across a broad spectrum of disciplines.\n", "role": "user" }, { "content": "Purpose: The purpose of this paper is to improve the understanding of the integration of business process management (BPM), business process re-engineering (BPR) and business process innovation (BPI) with big data. It focusses on synthesizing research published in the period 2006-2016 to establish both what the authors know and do not know about this topic, identifying areas for future research.\nDesign/methodology/approach: The research is based on a review of 49 published papers on big data, BPM, BPR and BPI in the top journals in the field 2006-2016.\nFindings: In this paper, the authors have identified the most influential works based on citations and PageRank methods. Through network analysis the authors identify four major clusters that provide potential opportunities for future investigation.\nPractical implications: It is important for practitioners to be aware of the benefits of big data, BPM, BPR and BPI integration. This paper provides valuable insights for practitioners.\nOriginality/value: This paper is based on a comprehensive literature review, which gives big data researchers the opportunity to understand business processes in depth. In addition, highlighting many gaps in the current literature and developing an agenda for future research, will save time and effort for readers looking to research topics within big data and business processes.\n", "role": "assistant" } ]
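The review's comparison of citation counts with PageRank rests on a simple idea: raw counts treat every citation equally, while PageRank weights a citation by the influence of the citing paper. A toy sketch with networkx (the papers and edges below are invented for illustration, not the review's 49-article network) shows the two rankings side by side:

    import networkx as nx

    # Toy citation network: an edge u -> v means "paper u cites paper v".
    G = nx.DiGraph()
    G.add_edges_from([
        ("P1", "P3"), ("P2", "P3"), ("P4", "P3"),
        ("P4", "P2"), ("P5", "P1"), ("P5", "P2"),
    ])

    citations = dict(G.in_degree())        # raw citation counts
    pagerank = nx.pagerank(G, alpha=0.85)  # influence-weighted scores

    for paper in sorted(G.nodes):
        print(paper, citations[paper], round(pagerank[paper], 3))

Papers that diverge across the two columns (few citations but high PageRank, or the reverse) are exactly the cases the review's dual ranking is designed to surface.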
Literature practices: processes leading up to a citation
[ "Social sciences", "Citation", "Norms", "Academic shortcuts", "Literature practices", "Secondary citation" ]
[ { "content": "Summarize the following paper into structured abstract.\n\n1. Introduction: The social sciences are currently criticised for their limited success in reproducing the results of studies, which ultimately has led to the coining of the term \"replication crisis\". This topic has sparked a variety of discussions, blog posts and journal publications starting with Ioannidis (2005) reminding the research community about the importance of reproducible findings. Later, the Open Science Framework conducted a large-scale replication attempt of multiple major psychology studies (Open Science Collaboration, 2015), the result of which indicated that a large quantity of the original results could not be replicated.\n2. Method: A multi-study approach was adopted. First, semi-structured interviews were conducted with a small sample of researchers from the local university to pilot questionnaire items prior to the main study. Second, as part of the main study the slightly modified questionnaire items were sent out to a sample of researchers who had recently published a journal article.\n3. Results: Since not all researchers answered all questions or finished the questionnaire, sample sizes vary per question. For the purpose of completeness of information, the total number of valid respondents (n) is reported for each individual analysis. Furthermore, some of the questions had multiple answer possibilities and therefore the percentages do not add up to 100 per cent. Whenever this applies it is mentioned for the respective analysis. Additionally, three questions were coded after the data collection using thematic analysis. For the current study, only those items relevant to our research questions were used (12 out of 21 questions specifically about the recent journal article; 14 out of 21 questions about general behaviour and experiences).\n4. Discussion: This exploratory study investigated possible factors that influence researchers' literature practices in the process of publishing a journal article. It was hypothesised that researchers sometimes rely on shortcuts in the process of citing sources. Furthermore, it was expected that researchers' years of work experience would influence their literature practices and consequently the extent to which they use shortcuts.\n5. Conclusion: To our knowledge, this study is the first that provided an exploration of literature practices and more specifically the process leading up to a citation by asking researchers about their behaviours concerning the selection, reading and citing of sources. Literature practices deserve more attention in the research community, as improper practices likely contribute to citation, quotation and paraphrasing errors. Ultimately, literature practices also constitute a reflection of the state of research at the moment and offer one of many starting points towards finding a solution of the still ongoing replication crisis. Moreover, literature practices are not only relevant to disciplines that are currently affected by the replication crisis, but all scientific disciplines in general. Although literature practices are affected by a multiplex of factors (such as workload, time pressure, journal editors' behaviours and individual citation practices) they require close up investigations to obtain a more accurate picture of how they specifically influence researchers' behaviours. 
Hopefully, that way we can ultimately ascertain the role of citations as a reliable and valid practice for acknowledging each other's work and constructing and expanding the theoretical frameworks within which we conduct and interpret our work.\n", "role": "user" }, { "content": "Purpose: Literature practices represent the process leading up to the citation of a source, and consist of the selection, reading and citing of sources. The purpose of this paper is to explore possible factors that might influence researchers during this process and discover possible consequences of researchers' citation behaviours.\nDesign/methodology/approach: In this exploratory study, various factors which could influence literature practices were explored via a questionnaire amongst 112 researchers. Participants were first authors of articles published in 2016 in one of five different journals within the disciplines of experimental psychology, educational sciences and social psychology. Academic positions of the participants ranged from PhD student to full professor.\nFindings: Frequencies and percentages showed that researchers seemed to be influenced in their literature practices by various factors, such as editors suggesting articles and motivation to cite. Additionally, a high percentage of researchers reported taking shortcuts when citing articles (e.g. using secondary citations and reading selectively). Logistic regression did not reveal a clear relationship between academic work experience and research practices.\nPractical implications: Seeing that researchers seem to be influenced by a variety of factors in their literature practices, the scientific community might benefit from better citation practices and guidelines in order to provide more structure to the process of literature practices.\nOriginality/value: This paper provides first insights into researchers' literature practices. Possible reasons for problems with citation accuracy and replicating research findings are highlighted. Opportunities for further research on the topic of citation behaviours are presented.\n", "role": "assistant" } ]
Leaders and their use of motivating language
[ "Leadership", "Communication", "Motivating language", "Interviews", "Qualitative data" ]
[ { "content": "Summarize the following paper into structured abstract.\n\nLeadership and language: Leader communication\nMLT: MLT (Sullivan, 1988) provides a model for understanding how leadership language works and evaluating its impact on workers. Using a psycholinguistic framework, Sullivan (1988) conceptualized three types of supervisor-subordinate communications:\nMethodology: Research framework\nResults: The selected comments are presented in terms of their representation of one or more of the three motivating language schema, namely direction-giving, meaning-making, and empathetic language. Where appropriate, each comment is followed by the language type most applicable to that comment, where DG, direction giving; MM, meaning making; and EL, empathetic language. For all comments we provide selected demographic information that shows gender of respondent, level of seniority (CEO, COO), or Executive (Vice President, Director), or Upper Middle (Department Executive, Plant Manager), size of organization (number of employees), and type of organization or industry.\nType of leadership needed today: Management is - day-to-day operations\nDiscussion: This analysis of interview comments identified a range of issues relevant to leaders of private and not-for-profit organizations. MLT was used as a framework for exploring three language styles used by leaders when dealing with these specific issues (Mayfield et al., 1995, 1998; Sullivan, 1988), namely: first, perloctionary or direction-giving, uncertainty reducing language which clarifies the work to be done; second, locutionary or meaning-making language which uses stories to explain the culture, structure, and values of the organization; and third, illocutionary or empathetic language which shares emotions with the workers through praise for a job well done (Mayfield et al., 1998, p. 237).\n", "role": "user" }, { "content": "Purpose: The purpose of this paper is to use Motivating Language Theory (MLT) as a framework in determining leader use of different language styles during times of dynamic change.\nDesign/methodology/approach: This exploratory study of executive members of the Australian Institute of Management examined how MLT may provide a mechanism for analyzing what leaders say in the context of a work setting. Written transcripts of interview data were analyzed to determine leaders' use of three language styles: meaning-making (locutionary), direction giving (perlocutionary), and empathetic (illocutionary) language. The software program NVivo was used to help aggregate themes emerging from written and interview data into discrete modules to enable more robust and comprehensive analysis of the data.\nFindings: A range of issues pertinent to the way executives lead and manage their businesses, both in private and not-for-profit organizations, was identified in this study. Direction giving language was most prominent in management activities and leadership that was strategic or people related, followed by meaning-making and empathetic language. The findings suggest that business leaders could develop a repertoire of language approaches in order to achieve organizational outcomes.\nResearch limitations/implications: More research of the language of leadership and its implications for organizational outcomes is warranted. For instance, the strategic integration of motivating language with a compatible set of leader behaviors, organizational objectives, and cultures may reveal mechanisms as yet unknown for achieving outcomes. 
Research is needed to determine the content and construct of these behaviors, objectives, and cultures. Some questions also remain regarding the appropriateness of using the Motivating Language Scale to examine the types of language used by managers and leaders in the workplace. The authors recommend further application of MLT through the Motivating Language Scale in order to help clarify these queries.\nOriginality/value: This paper used MLT as a framework for identifying leader use of different language styles during times of dynamic change. MLT has been used to identify the speech patterns of leaders during verbal communication exchanges with staff and work colleagues, but this study is the first example of the use of MLT when examining leader responses to interviews and in their written responses to survey questions.\n", "role": "assistant" } ]
A longitudinal study on Australian online DVD pricing
[ "Internet shopping", "Retailers", "Pricing policy", "Australia", "Consumer goods" ]
[ { "content": "Summarize the following paper into structured abstract.\n\nIntroduction: The rapid development of online retailing has inspired a fast growing research interest in studying the online pricing evolution (see, e.g. Pan et al., 2004, and Xing et al., 2006 for recent reviews). This study adds to the literature a new research on the pricing behavior and dynamics in the Australian online DVD market, with a data set of 27,030 price observations collected weekly over a year. To our knowledge, this is the first systematic study on the Australian online DVD market from such a perspective.More specifically, this research note contributes to the literature by comparing pricing behaviors of two types of online retailers - pure Internet retailers (hereafter DOTCOMs) and online branches of multi-channel retailers (hereafter OBMCRs), and focusing on the differences in their:* average prices;* price dispersions;* dynamics of prices; and* dynamics of price dispersions.We also address some other important issues such as price convergence, brand loyalty, as well as pricing strategies. In particular, the note attempts to answer the following questions:* Do OBMCRs and DOTCOMs charge the same price?* Do the prices of OBMCRs and DOTCOMs disperse similarly?To the extent that these are not true, in particular that OBMCRs charge higher price than DOTCOMs, and that price dispersion is higher for OBMCRs than for DOTCOMs, we further ask:* Does price differential between OBMCR and DOTCOM persist with time?* Do the price dispersions and the difference of price dispersions persist with time?To the extent that H3 and H4 are not true, we further investigate whether there is any evidence for some kind of price convergence.The online DVD market, one of the most successful markets that have migrated online and enjoyed considerable growth and sales, offers an interesting platform to make these comparative analyses (see Tang and Xing, 2001 for the first analysis). Panel data models are used for \"multiple\" comparisons of prices, while keeping the undesirable factors such as individual title effect, unobserved random effect, and serial correlation under control. This together with the fact that the branded DVDs are physically homogeneous makes data collection tractable and price comparisons meaningful and reliable.\nData description and estimation results: Data description\nConcluding remarks: This study investigates the price movement of the Australian online DVD market using data collected over a one year time span. The purpose of this research is to use our methodology of market price evaluation to the online retailer format directly to study the price behavior and market dynamics in the Australian online DVD market.From our statistical results, we found that the overall market prices decrease over time significantly within our sample period - about 17.4 per cent decrease for all titles, however, with the popular titles prices decreasing faster than those of the random titles.Our results show that DOTCOMs' within titles price dispersions are higher than those of the OBMCRs, but the difference diminishes over time due to the higher increasing rate of OBMCRs' random titles price dispersions. Our results also show that the within retailers price dispersions of OBMCRs are higher than those of the DOTCOMs and the two price dispersions continue to diverge over our sample period. 
This finding is consistent with the empirical results of Tang and Xing (2001) and Xing and Tang (2004), in which multi-channel retailers were found to have higher average prices and higher standard deviations in prices than DOTCOM retailers, using simpler statistical techniques and cross-sectional data from the USA only. This paper employs a panel data model, which can be used to examine both the evolution of price levels and price dispersions, in another market (Australia). Further, contrary to Lee and Gosain (2002) and Clay et al. (2002), we found that in our case the price dispersions of the random-titles category are bigger than those of the popular-titles category. One possible explanation for this contradiction is that most retailers may now prefer to sell popular DVDs at similar discount rates from the moment those DVDs arrive in stock in order to promote sales, or to sell the popular ones at the prices suggested by the distributors, because severe competition among retailers and lower search costs for consumers make setting the best possible price the optimal pricing strategy. However, a retailer may keep just a few copies of some DVD titles in the random-titles category in stock. In this case, retailers may choose to give a big discount on these items in order to increase operating cash flow. The brand name effect matters in our results, leading to a significant difference in prices, consistent with Brynjolfsson and Smith (2000). Retailers like HMV try to use various loyalty programs to make customers less price sensitive and charge higher prices. For example, HMV encourages its customers to join its members' club and gives free gift coupons to high-loyalty members. Our results show that although online markets are more competitive than offline ones, they are far from perfect competition. From the test results on prices and price dispersions, we know that retailers can differ significantly from each other and that remarkable price dispersions can still exist in the online market for a long time. In testing the two types of retailers, we find that the prices of online branches of multi-channel retailers (OBMCRs) are significantly higher than those of their online-only counterparts (DOTCOMs) and that the difference between the standard deviations of prices for OBMCRs and for DOTCOMs becomes bigger over time. However, the patterns of the average prices for OBMCRs and DOTCOMs are quite similar over time. It seems that pricing information can reach every retailer, and different types of retailers react to that pricing information and adjust their prices accordingly. But due to the difference in retailer types, the price difference does not shrink over time. Our analysis shows that the two types of retailers have different price levels at the beginning, and this difference does not get smaller over time. From the analysis of the within-retailers price dispersions, we even found that the gap between the price dispersions of OBMCRs and DOTCOMs is widening over time, with the price dispersions of the OBMCRs bigger than those of the DOTCOMs. The reason may be that the pricing of the DOTCOMs is more flexible than that of the OBMCRs, which need to coordinate their online prices with their offline stores. The DOTCOMs can react to the price changes of their rivals in a very short time and change their prices more quickly than the OBMCRs. Therefore the competition among the DOTCOMs may be more intense. 
As a result, the price dispersions of the DOTCOMs are smaller than those of the OBMCRs over time. It remains an interesting question whether the pricing behaviors will become similar in the future. Our speculation is that the online market price level of the OBMCRs will remain somewhat higher than that of the DOTCOMs, since the channel coordination issue facing any multi-channel retailer adds some friction, and certain market power enjoyed by large DOTCOM retailers seems unavoidable. As long as those factors exist and combine, different pricing behaviors of different types of online retailers will persist over time. It will be important to continue assessing online market dynamics in different product categories. Interesting issues such as brand loyalty and marketing strategy, with their impact on pricing behavior, are also worthy of further study.\n", "role": "user" }, { "content": "Purpose: - The purpose of this paper is to analyze the online DVD market in Australia, which has not been researched so far, and to examine whether the pricing patterns in this market exhibit any regularity consistent or inconsistent with research findings in other markets.\nDesign/methodology/approach: - A longitudinal data set of 27,030 price observations was collected weekly over one year. Various statistical tests - in particular, a least-squares dummy variable (LSDV) panel data model with serial correlation - were applied to the data.\nFindings: - In general, the online branches of multi-channel retailers have higher average prices and standard deviations than pure internet retailers. Overall, market prices decrease significantly over time within the sample period - about a 17.4 per cent decrease for all titles - with popular titles' prices decreasing faster than those of random titles. Furthermore, prices do not seem to converge between the two types of online retailers. For individual retailers, the brand name effect seems to matter in the results, with a significant difference in prices.\nResearch limitations/implications: - This is an empirical study on a specific market. More data, testing and comparison are needed for any generalization of the findings, to establish whether they are consistent in the global sense.\nPractical implications: - Online marketers of different types of retailers may need to reflect on their pricing policies and whether their pricing strategies have any room for improvement in profitability.\nOriginality/value: - This study investigates the price movement of the Australian online DVD market using data collected over a one-year time span.\n", "role": "assistant" } ]
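The least-squares dummy variable (LSDV) panel model named in the abstract above can be sketched with title fixed effects entering as dummies. The snippet below is an illustrative assumption, not the authors' specification: the file and column names are invented, and cluster-robust errors stand in for the paper's explicit serial-correlation structure:

    import pandas as pd
    import statsmodels.formula.api as smf

    # Hypothetical layout: one weekly price observation per DVD title and retailer,
    # with obmcr = 1 for online branches of multi-channel retailers, 0 for DOTCOMs.
    df = pd.read_csv("dvd_prices.csv")  # columns: price, title, week, obmcr

    # LSDV: C(title) expands into one dummy per DVD title (the fixed effects);
    # week captures the time trend and obmcr the retailer-type price gap, with
    # week:obmcr testing whether that gap widens or shrinks over the sample.
    model = smf.ols("price ~ C(title) + week + obmcr + week:obmcr", data=df)
    res = model.fit(cov_type="cluster", cov_kwds={"groups": df["title"]})
    print(res.params[["week", "obmcr", "week:obmcr"]])

In this framing, a negative week coefficient matches the reported overall price decline, a positive obmcr coefficient matches the OBMCR price premium, and a week:obmcr term near zero matches the finding that the gap does not shrink over time.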
Does collectivism necessarily negate the spirit of entrepreneurship?
[ "Gender", "United Arab Emirates", "Collectivism", "Individualism", "Business students", "Entrepreneurial Potential" ]
[ { "content": "Summarize the following paper into structured abstract.\n\n__NO_TITLE__: The concepts of individualism and collectivism, as personality traits, have been very popular in the literature dealing with social and personal orientations (Spence, 1985; Wagner and Moch, 1986; Sampson, 1988; Dansereau, 1989; Wheeler et al., 1989; Schwartz, 1990; Morris et al., 1993; Moriano et al., 2012). There is no doubt that these personality traits play an important role in influencing entrepreneurial behaviors, including the potentials and intentions to become entrepreneurs (Zha et al., 2006; Johnston et al., 2009; Zahariah et al., 2010; Nguyen et al., 2010; Moriano et al., 2012). Linking these personality traits to entrepreneurial potential (EP) is imperative for theoretical and empirical reasons, because entrepreneurs with different psychological traits may have a tendency to exhibit different degrees of EP and behaviors.\nEP: Previous research used different approaches and terminologies when assessing entrepreneurial behaviors which include \"entrepreneurial orientation\" (Miles and Arnold, 1991; Entrialgo et al., 2000b; Grubuz and Aykol, 2009), \"entrepreneurial intention\" (Krueger and Carsud, 1993; Krueger et al., 2000; Douglas and Shepherd, 2002; Bjandari, 2006; Van Gelderen et al., 2007), and \"EP\" (Hull et al., 1980; Lachman, 1980; Mitchell, 1989; Krueger and Carsud, 1993; Krueger and Brazeal, 1994; Walstad and Kourilsky, 1998; Thompson, 2004; Sahay and Rai, 2004; Veciana et al., 2005; Baughin et al., 2006; Gerry et al., 2008; Guerrero et al., 2008). Although these terms bear some similarity, they differ slightly in the following respect. The term \"entrepreneurial orientation\" refers to the \"processes, structures, and/or behaviors that can be described as aggressive, innovative, proactive, risk taking, or autonomy seeking\" (Lumpkin and Dess, 1996, p. 147). The term \"entrepreneurial intention\" focusses on \"intentions to start-up a business\" (see e.g. Veciana et al., 2005; Gerry et al., 2008; Guerrero et al., 2008; Turker and Selcuk, 2009).\nIndividualism, collectivism, and entrepreneurship: Personality traits are assumed to play an important role in influencing people to become an entrepreneur (de Pillis and Reardon, 2007; Zahariah et al., 2010) although some research questions this relationship (Johnston et al., 2009). While most researchers would agree that current and future entrepreneurs have profiles and traits that enhance their success, there is debate as to which are the most distinguishing traits leading to people becoming entrepreneurs and the success of their activities. Linking the psychological traits to EP is imperative for theoretical and empirical reasons, because entrepreneurs with certain psychological traits may exhibit different degree of EP (Entrialgo et al., 2000a, b). For example, using a questionnaire survey on undergraduate business students in a Malaysian public university, Zahariah et al. (2010) found that personality traits play an important role in influencing the students' decision to become entrepreneurs. Other recent studies found that personality traits affected the behavior and performance of entrepreneurs. 
However, a great number of these studies have focussed more on the traits of actual entrepreneurs rather than on individuals with the potential and intention to become entrepreneurs.\nTheoretical foundations and hypotheses: The study of future entrepreneurs initially made some general assumptions about the psychological characteristics of \"entrepreneurs\" (Sexton, 1980; Hull et al., 1980; McClelland, 1987). However, over the years, researchers have discarded more and more of these personal characteristics (Thompson, 2004; Okhomina, 2010; Zhao et al., 2010). Identifying personal traits that can influence entrepreneurial behavior can help us cultivate entrepreneurs.\nMethodology: Survey data were collected from a sample of business students attending courses at a university in the UAE. The questionnaire instrument was piloted on a random sample prior to final delivery, and the scales and associated items were adapted from previous validated research (see measures section below). The questionnaire was in English, as English was the language of instruction for the target student population.\nAnalysis and results: Table III shows that the mean score on collectivism for the total sample is much higher than that of individualism. This prevalence was also confirmed through a one-sample t-test revealing that the difference in mean scores (11.63 for collectivism against 8.11 for individualism) was highly significant (p < 0.001). This is consistent with H1 and further supports previous research on the prevalence of collectivism in the Middle East.\nConclusions and implications: A vital question that deserves more research attention is the implications of culturally based personal values for the successful creation and growth of entrepreneurial ventures.\n", "role": "user" }, { "content": "Purpose: The purpose of this paper is to explore the relationships between individualism/collectivism (as personal traits) and individuals' potential to become an entrepreneur.\nDesign/methodology/approach: The study draws on a sample of 503 students enrolled in business courses at a university in the United Arab Emirates (UAE). It focusses on the concept of Entrepreneurial Potential (EP) as a measure of \"desirability and inclination\" to start a business. The paper tests the hypothesis that the concepts of individualism and collectivism are not necessarily polar ends of the same continuum and examines their impacts on EP, controlling for age and gender as main demographic characteristics. Four main hypotheses are explored.\nFindings: Statistical analysis confirms the two main hypotheses. The results reveal that: first, concomitant with the dominant collectivist values at national level, potential future entrepreneurs evolving in the Middle East/Gulf region endorse personal traits of a predominantly collectivist nature; second, contrary to popular assumptions, individualism does not have a strong impact on youth EP. In the context of this study, the impact of collectivism on EP is most significant. The paper also found that gender had no significant impact on EP.\nResearch limitations/implications: The use of personality traits alone as a basis for understanding predictors of EP may not be sufficient. A number of contextual variables (such as the socio-cultural and economic context) may also have a strong influence. Unfortunately, it is not possible to test for these effects with the data available in this study. 
Future research may consider these.\nPractical implications: Despite its limited scope (limited sample size and target population), the findings of this study are useful to both practitioners and policy makers. Management practitioners interested in entrepreneurial behaviors need to take stock of the fact that future entrepreneurs can (and perhaps should be able to) blend their competitive entrepreneurial drive with the spirit of collectivism. This is particularly relevant in selection processes using personality tests for the purpose of identifying the most likely candidates for entrepreneurial ventures involving youth.\nOriginality/value: The findings of this study do not support the general assumption that individualism and entrepreneurship necessarily go hand in hand. They clearly indicate that collectivism has more explanatory power in this regard, though this may be contextual. These findings may be explained by the context of the study (UAE/Middle East). The overwhelming majority of the respondents are from the Middle East and Gulf region, where collectivist aspirations are predominant. Yet, the economies of countries in those regions are fuelled by an increasing number of advanced and quite daring entrepreneurial projects, as exemplified by the modernist business ventures in Dubai, Qatar, and Abu Dhabi.\n", "role": "assistant" } ]
IJPDLM's 40th anniversary: an overview and retrospective analysis
[ "Serials", "Distribution management", "Research" ]
[ { "content": "Summarize the following paper into structured abstract.\n\nIntroduction: The International Journal of Physical Distribution & Logistics Management (IJPDLM) reached another milestone this past year, attaining the ripe old age of 40, making it one of the oldest journals in the logistics and supply chain management (SCM) field. As we will discuss shortly, it is also one of the most highly respected journals in the field as it consistently ranks near the top in surveys of logistics educators and practitioners.We consider it a great honor and privilege to have served as co-editors these past five years. We want to take this opportunity to thank the authors, Editorial Advisory Board members, regional editors, special issue editors, book editor, manuscript reviewers, and staff at Emerald (particularly Lucy Sootheran, the most supportive publisher an editor could have) whose contributions both enhanced the quality of the journal and made our job much easier. We truly appreciate each and every one of you.Also, in this paper, we want to acknowledge the most frequently downloaded articles and those authors who have published most frequently in the journal during its 40-year history. Additionally, we will provide a brief overview of some key characteristics of the journal during our five years at the helm. First, however, we would like to present some external evidence of the journal's stature in the field.\nThe journal's reputation: A review of the more recently published articles that rank logistics journals clearly reveals the high esteem in which both academics and industry practitioners hold IJPDLM. It consistently ranks very high in surveys of these groups.Menachof et al. (2009) developed a composite usefulness index based on research, teaching and outreach usefulness to assess the quality of 33 SCM periodicals, 22 of which were academic journals, and 11 were practitioner publications. Their sample of 143 respondents was comprised mostly of US and European faculty members, though Australia and Asia were also represented. IJPDLM ranked second of the 22 academic journals and third overall. IJPDLM was ranked second on the basis of research usefulness.Zsidisin et al. (2007) conducted a study of purchasing and supply management journals, generating rankings for 24 operations, logistics, and general management academic journals from a survey of 111 faculty members, about 90 percent of whom were at US institutions. IJPDLM ranked eighth among this more broadly focused set of 24 academic journals.Rutner and Fawcett (2005) surveyed 41 faculty members and 113 practitioners to assess the quality of nine academic journals and one practitioner journal. IJPDLM ranked second among the academic journals and third overall.Gibson and Hanna (2003) applied a usefulness index similar to that employed by Menachof et al. in a survey of 102 US faculty members. They established scores for 22 SCM periodicals, 11 academic, and 11 practitioner. IJPDLM ranked fourth among the 11 academic journals and seventh overall. Similar to the Menachof et al.'s study, IJPDLM was ranked second on the basis of research usefulness.Finally, in a different type of study, Autry and Griffis (2005) developed a social anthropology of logistics research based on articles published in the four leading logistics journals. They used the published rankings of logistics journals between 1995 and 2004 to determine the four leading journals. 
IJPDLM was among this group of elite journals they used in their study. Each of the studies summarized above was published in a different journal, the profiles of their samples were different, and the journals they included in their analyses varied. Yet, IJPDLM was ranked very highly in each. Another popular indicator of a journal's reputation is the number of times its articles are cited. IJPDLM has been accepted for coverage in Thomson Reuters Social Sciences Citation Index (formerly ISI) and will receive its first journal impact factor (a measure of the frequency with which the \"average article\" in the journal has been cited) in 2011. When Kumar and Kwon (2004) investigated the number of citations for 19 logistics and transportation journals and utilized three different weighting approaches to rank the journals, IJPDLM was ranked second regardless of which weighting approach was used. Additionally, the number of article downloads may serve as a good proxy measure for citations to illustrate the impact of IJPDLM. A download occurs when someone opens the PDF or HTML of an article. Emerald began tracking IJPDLM downloads in 2005. Since then, as of September 15, 2010, there have been 1,478,700 article downloads. Thus, over this less than six-year time period, IJPDLM has averaged approximately 250,000 downloads per year. The most frequently downloaded articles are identified in the next section.\nTop articles and authors during the 40-year history: There have been nearly 1,400 articles published in IJPDLM over the last 40 years and a little more than 1,500 different authors. The 25 most frequently downloaded articles since 2005 appear in Table I. A review of the topics covered by these articles provides some indication of the primary interests of logistics academics and professionals over the last five years and perhaps emerging areas of research for the future. It is particularly worth noting that six articles have each been downloaded more than 10,000 times. Two of these articles directly address supply chain risk management and a third (on designing agile supply chains) has implications for risk management. Two articles identify and discuss methodologies utilized in logistics research and one article pertains to sustainable SCM. This latter article was published in mid-2008 and has thus been downloaded nearly 13,500 times in about two years. As no journal could exist without authors willing to submit their work, we would like to recognize those authors who have published most frequently in the journal. The 25 most frequently published authors are listed in Table II. The number of articles for each author includes both single- and co-authored articles.\nThe 2006-2010 period: As we reflected upon the last five years of our editorship and what information might be of interest to the journal's constituents, we decided to focus on two key characteristics of IJPDLM: the international diversity of its author base and the diversity of logistics subject matter it includes. Svensson et al. (2008) investigated the geographical affiliation of authors in three leading logistics journals over the 2000-2007 timeframe and found IJPDLM to have greater diversity than the other two journals. We were interested in extending their findings to encompass all five years of our editorship. We also thought it a good idea to differentiate between \"special\" issues and \"regular\" issues of the journal. 
As our loyal readers know, the journal typically has three or four special issues from a total of ten issues published per year. Some of our special issues are the result of relationships the journal has established with leading logistics research organizations, and they represent the best papers presented at their annual conferences. These papers undergo a double-blind review process and are held to standards similar to those for papers submitted and managed through the regular review process. Other special issues result from calls for papers focused on a specific topic approved by the co-editors. Special issues\nConclusion: The IJPDLM has a proud history as one of the earliest, best-known and most respected journals in the logistics and SCM area. This paper was written by the co-editors as part of a special issue to celebrate the 40th anniversary of the journal. More specifically, the purpose of the paper was to take a retrospective look at select characteristics of the journal over a 40-year period with special emphasis on the past five years. Some key findings of this retrospective analysis are summarized below. The analysis indicates that IJPDLM retains its stature as one of the most prominent academic journals in the logistics discipline, according to more recently published studies. Likewise, a 40th anniversary overview of the journal yielded summaries of the 25 most frequently downloaded articles as well as the 25 most frequently published authors. A significant finding was that four of the six most frequently downloaded articles related to the topics of risk management and sustainability. An analysis of the 2006-2010 period of our editorship revealed both widespread author and topic diversity. This was true for the 30 regular as well as the 20 special issues. The author diversity alone suggests that IJPDLM is truly an international journal with Europe, North America, Asia, and Australia being represented. Analysis of topic diversity with respect to both the regular and the special issues revealed evidence of an emerging research area pertaining to significant societal topics relating to logistics and SCM. Furthermore, there was evidence of a general research interest in addressing challenges and issues of a more external nature or scope. As we planned for this 40th anniversary issue of IJPDLM, we wanted to offer a futuristic orientation as well as a retrospective analysis. Thus, this special issue features four articles by prominent researchers with expertise on topics selected by the co-editors as having special appeal and relevance for the journal's readership. The four topics selected for inclusion in the anniversary issue strongly reflect the external and societal orientations that were identified in the retrospective analysis. The four articles and their authors are:\n1. Patricia J. Daugherty - \"Review of logistics and supply chain relationship literature and suggested research agenda\".\n2. Gyongyi Kovacs and Karen M. Spens - \"Trends and developments in humanitarian logistics: a gap analysis\".\n3. Craig R. Carter and P. Liane Easton - \"Sustainable supply chain management: evolution and future directions\".\n4. Martin Christopher and Matthias Holweg - \" 'Supply chain 2.0': managing supply chains in the era of turbulence\".\nWe sincerely hope that you will enjoy the articles comprising this special issue as much as we have enjoyed the privilege of planning and organizing it. 
We close by offering congratulations to IJPDLM on its 40th anniversary and best wishes for continued prominence and success. We know it is in excellent hands with the new editorial team.\n", "role": "user" }, { "content": "Purpose: - The purpose is threefold: to assess International Journal of Physical Distribution & Logistics Management's (IJPDLM's) reputation for quality and impact; to identify leading articles and authors during the journal's 40-year history; and to report on the international diversity of the journal's author base and the diversity of its subject matter over the last five years.\nDesign/methodology/approach: - The paper uses the following: a literature review of recent journal articles that assessed the quality of logistics and supply chain management (SCM) journals; IJPDLM article download counts and article counts per author over the last 40 years; and an assessment of subject matter content and geographical base of authors for articles published in IJPDLM over the last five years.\nFindings: - IJPDLM consistently ranks among the top logistics and supply chain journals on the basis of research quality and usefulness. IJPDLM is quite diverse both with respect to logistics subject matter and to the location of its authors. The most popular topics over the last five years are: purchasing and supply management; inter-organizational relationships; customer service and demand management; and logistics outsourcing/3PL. A key emerging research area for logistics and SCM is the discipline's contributions to addressing important societal issues.\nPractical implications: - The findings pertaining to current and emerging research areas will be of interest and value to all logistics and SCM researchers.\nOriginality/value: - The analysis of IJPDLM's reputation and the assessment of the subject matter it covers are both original and of interest to prospective authors.\n", "role": "assistant" } ]
Windmills in cyberspace
[ "Cyberspace", "Twitter", "Democracy", "Algorithms", "Politics", "Elections", "Automated opinion", "Open societies", "Political bots", "Propaganda" ]
[ { "content": "Summarize the following paper into structured abstract.\n\n__NO_TITLE__: \"What giants?\" asked Sancho Panza. \"The ones you can see over there\", answered his master, \"with the huge arms, some of which are very nearly two leagues long\". \"Now look, your grace\", said Sancho, \"what you see over there aren't giants, but windmills, and what seems to be arms are just their sails, that go around in the wind and turn the millstone\". \"Obviously\", replied Don Quixote, \"you don't know much about adventures\".\n", "role": "user" }, { "content": "Purpose: The purpose of this paper is to stimulate an urgent dialogue about the impact of automated opinion engines (\"bots\") on the functioning of public institutions in democratic societies. While the use of political bots may or may not have influenced the recent US presidential election or the UK \"Brexit\" referendum, it is believed that the implications of the use of political bots are more broadly troubling. There is an urgent need for common standards to prevent the abuse of these powerful digital tools.\nDesign/methodology/approach: The paper is based on a review of recent pieces describing political bots and attempts to extrapolate our learnings from recent political campaigns to the broader context of the discussion of all public issues.\nFindings: It was found that the use of political bots has a powerful ability to manipulate public opinion and could easily infect the totality of public discourse.\nResearch limitations/implications: The core data on which the author's discussion is based are limited to primary research by a small number of data scientists. This pool needs to be significantly expanded.\nPractical implications: The insights the author proposes should serve to stimulate an organized effort to develop common standards for the use of and to prevent the abuse of these automated opinion tools.\nSocial implications: Unless an effort along these lines is made, distrust in all democratic and transparent institutions is highly likely to decrease.\nOriginality/value: While much has been written about bots in politics, the author believes that this is the first attempt to trace the dangers of bots across a much broader set of community institutions.\n", "role": "assistant" } ]
Key determinants of SMEs' export performance: a resource-based view and contingency theory approach using potential mediators
[ "Export performance", "Vietnam", "SMEs", "Structural equation modelling", "Resource-based view", "Contingency theory" ]
[ { "content": "Summarize the following paper into structured abstract.\n\nIntroduction: Market globalization, rapid technological improvements, government support and recent changes to the global economy, such as market and trade liberalization initiatives, have all played a positive role in increasing the internationalization of small and medium enterprises (SMEs) through exporting their products or services (Andersson et al., 2004; Fillis, 2007; Krammer et al., 2018). The significant contribution of SMEs to job creation, innovation and economic restoration is well known (Westhead et al., 2004). Efforts to improve SMEs' Export Performance (EP) have thus become prominent in the area of export-related research.\nExport performance perspective and hypotheses: Research on the key determinants of SMEs export has attracted a growing amount of interest in the literature over the past few decades because of the importance of SMEs in terms of exports and job creation (Sousa et al., 2008; Zou and Stan, 1998; Rua et al., 2018; Dhanaraj and Beamish, 2003; Krammer et al., 2018; Anil and Shoham, 2017; Manzanares, 2019). These studies contributed to the literature in terms of finding many relevant determinants of EP and how to measure them. According to Beleska-Spasova (2014), achieving effective EP is at the core of the tactical legislative procedure for both businesses and other entities. For corporations, successful EP shows the extent of a firm's goals, both financial and non-financial. Such goals are achieved in the global setting within an agreed period, and they can be evaluated by the appropriateness of the selected export approach, given the particular internal and external factors (Beleska-Spasova, 2014).\nDevelopment of hypotheses: This paper proposes a number of EP factors based on the literature review above and on feedback from SMEs gathered via focus group meetings with over 60 Vietnamese SMEs. The choice of the sample SMEs was not limited to any specific business type but the sample covers industries operating in the three selected regions (i.e. North, Central and South of Vietnam). Figure 1 presents the conceptual model with the relevant constructs. We specifically look at the issues faced by SMEs in Vietnam in their efforts to internationalize their activities. In the first step of developing this model, as indicated earlier, we looked at the contingency theory and the RBV of a firm (Dhanaraj and Beamish, 2003; Theingi and Purchase, 2004; Estrin et al., 2008; Krammer et al., 2018). The contingency theory and RBV approaches are effective and applicable for understanding the concept of EP and SMEs' export barriers (Beleska-Spasova, 2014). This study focuses on developing an integrative model that takes all aspects of export drivers into account, and provides and examines a more comprehensive framework for examining SMEs' export drivers. Hence, this study focuses on several important factors of EP and explores an important part of the literature that has been overlooked, namely, the potential mediating factors of EP. Consequently, this research develops nine hypotheses and examines them empirically by using data from the Vietnamese service sectors.\nMethods and data: Empirically, this research develops nine hypotheses and examines them by using data from various Vietnamese sectors (manufacturing, services and construction) provided by 364 SME exporters in three regions across Vietnam (i.e. North, Central and South). 
A random sample of various Vietnamese SMEs from all industries and locations across North, South and Central Vietnam was selected. In addition to detailed data about their enterprises and the demographics of the owners, information about employment, net annual sales and the EP of these SMEs was collected as well. The survey comprised 17 main questionnaires (with over 100 technical questions) to address and examine all determinants of EP.\nSample characteristics: As indicated in Table III, the largest proportion of the SME exporters is in the South region of Vietnam, and they mainly operate in the manufacturing and service industries, 134 firms (59.8 per cent) and 83 firms (37.1 per cent), respectively. In the North, the share of manufacturing activities is much greater than in other regions for both exporters and non-exporters.\nResults and discussion: In this study, we first examined the fit of the questions (the observed variables) and constructs (the latent variables) via the SEM method. As the results show (Table II), except for four questions (or observed variables) related to the self-efficacy construct (a latent variable), the questions show strong association with the constructs. The four non-significant observed variables were removed from the final model.\nConclusions and further implications for practice: This research has developed a model for examining the EP of SME exporters in an emerging market, Vietnam, by focusing on the direct and indirect effects of internal and external factors on EP. The study highlights the importance of mediators to help SMEs grow and compete internationally in a context where there is a need for more empirical and theoretical research. The potential mediators that have been tested include innovation strategy, export marketing strategy and business strategy. Our model is based on contingency theory and the RBV approach. Even though recent literature in the area of EP has focused on firms from emerging countries, studies in this area are still limited.\n", "role": "user" }, { "content": "Purpose: Various barriers discourage small and medium-sized enterprises (SMEs) from entering or expanding their export activities in international markets, especially SMEs in emerging markets. The purpose of this study is to look at capacity building to accelerate SMEs' export performance.\nDesign/methodology/approach: This study draws on contingency theory and takes a resource-based and market-based view to provide a holistic understanding of the issue. This study uses primary data collected via extensive surveys from active SMEs in three main industrial regions in Vietnam to undertake confirmatory factor analysis and structural equation modeling for quantitative analysis.\nFindings: The results confirm and show the significant effects of various determinants on firms' export performance. These research findings make a scientific contribution and have significant implications for understanding the effective internal and external export drivers and mediators in an emerging market and for enhancing SMEs' export performance.\nPractical implications: This study helps SMEs to improve their export performance by systemizing their decision-making in export activities, improving the main export drivers highlighted in this study and developing the required training programs for their teams. 
The outcomes also help policymakers and regulators to improve the current SME ecosystem in Vietnam through training programs, improving policies, facilitating trade, providing more government assistance, etc. The results of this study can be extended to other emerging markets with a similar economic structure and legal system.\nOriginality/value: Given the need for more work on export performance, this paper develops and tests a holistic conceptual framework that accounts for all aspects of export drivers, and provides a more comprehensive model for examining SMEs' export drivers. This theoretical framework also incorporates three potential mediators (i.e. innovation strategy, export marketing strategy and business strategy) to investigate the effect of internal and external factors on export performance, highlighting the importance of the mediating effects on SMEs in achieving growth and competing in the international arena.\n", "role": "assistant" } ]
Prerequisites for performance measurement supporting purchaser-supplier collaboration
[ "Performance measurement", "Supply chain management", "Value co-creation", "Antecedent", "Purchasing", "Purchaser-supplier relationship" ]
[ { "content": "Summarize the following paper into structured abstract.\n\n1. Introduction: Benchmarking includes systematic identification of best practices, innovative ideas and effective operating procedures (Bogan and Callahan, 2001). It supports performance improvements through learning between companies and organizational units. Supply chain benchmarking can include the comparison of operations between suppliers aiming at best possible transactional value (e.g. fit for existing production process). It can also mean inter-organizational product/service development leading to new business opportunities. The latter approach reflects the relationship value (Lindgreen et al., 2012). Less attention has been paid on benchmarking supporting supply chain collaboration at the inter-organizational level (Choy et al., 2007).\n2. Literature on performance measurement supporting purchaser-supplier collaboration: 2.1 Overview of the literature on supplier performance measurement\n3. Methodology: A qualitative study for exploring the applicability and manifestation of constructs presented in the literature was carried out (cf. Patton, 1989). The data collection method in this study was interviewing. Secondary empirical method included observations during a development project of collaborative performance measurement in a single case setting. Selection of studied companies followed the idea of maximum variation (Flick, 2002) in order to justify the findings in several contextual settings. The four companies studied are large (annual revenue over one billion euros; over 10,000 employees in 2014) Finnish companies operating internationally and mainly in business-to-business markets. They represent the views of the purchaser. Two of the companies operate in service industries (ServA and ServB) and the other two (ManufA and ManufB) in manufacturing industries. In addition, two of the companies (ServA and ManufB) have continuous process-like production, homogenous products and large production volumes. By contrast, two companies (ServB and ManufA) have a more project-oriented production with at least a moderate level of tailoring of services and products for each customer. A summary of the companies studied and the interviewees appears in Table II.\n4. Empirical results: 4.1 Usage of performance measurement in purchaser-supplier collaboration\n5. Discussion: Many earlier studies have highlighted similar purposes for performance measurement in PSM as in any managerial functions: objectives setting, determining future actions (Gunasekaran et al., 2004), strategic alignment and financial contribution (Pohl and Forstl, 2011), identification of deviations from standards (Cousins et al., 2008) and benchmarking (Giannakis, 2007). This study intended to shed light on a less studied aspect: how performance measurement supports collaboration with suppliers. The key result regarding the first research question is that the companies apply supplier performance measurement widely but mostly in formal control mode. This means that deviations identified in product quality and delivery performance are communicated to suppliers when they occur, but in a non-systematic way. Communication appears one-directional from purchaser to supplier and even the one-directional communication is limited. This can pose challenges since suppliers easily become demotivated if a purchaser company does not communicate performance evaluation results and link them to other similar suppliers (Sundtoft and Ellegaard, 2011). 
In addition, this does not embody the idea of collaborative performance measurement, where bilateral communication should be facilitated (Noshad and Awasthi, 2015; Prahinski and Benton, 2004). This observation also reflects the lack of communication structures alongside performance measurement, which has been found to hinder supply chain integration (Forslund and Jonsson, 2009).\n6. Conclusions: This study contributes to the literature on performance measurement in PSM by extending the empirical knowledge on collaborative performance measurement and performance measurement in extended enterprises and networks. The objective of this paper was not to provide a framework for collaborative performance measurement but to present and evaluate prerequisites for measurement supporting collaboration. The empirical examination demonstrates that prevailing performance measurement practices represent a more transactional than relationship-oriented approach to collaboration between purchaser and supplier companies. Communication of measurement information between purchaser and supplier remains limited, and measurement mostly follows a logic whereby the purchaser company monitors its suppliers in a very similar manner (i.e. identification of deviations from agreed specifications/standards) to what it does in its own operations (cf. extended process measurement). Use of performance measurement in bilateral communication is mostly lacking.\n", "role": "user" }, { "content": "Purpose: The purpose of this paper is to clarify the prerequisites for performance measurement supporting purchaser-supplier relationships and value co-creation. It also explains the causes of the limited use of collaborative measurement.\nDesign/methodology/approach: Four case companies representing different contextual settings are studied. The primary source of empirical material is an interview study addressed to 24 interviewees. The empirical data are analyzed according to the constructs created as a result of the literature review.\nFindings: The results reveal that prevailing performance measurement practices represent a more transactional than relationship-oriented approach to purchaser-supplier collaboration. The technical prerequisites for collaborative performance measurement are mostly not fulfilled, inhibiting the use of performance measurement in a collaborative manner. It is proposed that the differentiation between project and process production types has implications for the importance of collaborative performance measurement.\nResearch limitations/implications: The paper illustrates the desirable characteristics of performance measurement supporting collaboration. It also presents an application of collaborative performance measurement in a single case context. The research reveals the need to develop non-financial performance measures further in order to facilitate the more proactive use of performance measurement supporting true value co-creation between purchaser and supplier companies.\nOriginality/value: The empirical research on the topic of performance measurement in purchasing and supply management (PSM) is often limited to intra-organizational measurement and highlights a transactional approach to collaboration between parties, although PSM research has otherwise acknowledged the importance of value creation and relationships between organizations.\n", "role": "assistant" } ]
From power to punishment: consumer empowerment and online complaining behaviors
[ "Empowerment", "Service failure", "Punishment", "Revenge", "Online complaining" ]
[ { "content": "Summarize the following paper into structured abstract.\n\nIntroduction: In September 2011, Bank of America (BOA) proposed a $5 monthly fee for its debit card users. Resentment arose among BOA's customers and among them was a 22-year-old college graduate Molly Katchpole. Katchpole's anger led her to launch a petition on Change.org to repeal the $5 fee. The response was overwhelming. In less than a month, more than 300,000 people from all 50 states joined the petition. More than 21,000 people pledged to close their BOA checking accounts. Negative comments poured into BOA's Facebook and Twitter accounts. As complaints mounted from bank customers and the online pretest gained national media attention, BOA announced it would drop its banking fee in less than one month of announcing the proposal. Within weeks, other major financial institutions such as Citi, Wells Fargo and JPMorgan Chase announced dropping their debit card fee plans due to public pressure.\nLiterature review: Psychological empowerment\nMethod: To test the study hypotheses and research question, the study conducted an online experiment with a 3 (intrapersonal empowerment: high/low/control) by 3 (interactional empowerment: high/low/control) between-subject design. The experiment instrument was developed via Qualtrics and a link was distributed through Amazon Mechanical Turk (MTurk). MTurk is an online panel run by Amazon.com, and it is widely used for behavioral research and survey studies (Goodman et al., 2013; Mason and Suri, 2012). For sample selection, a qualification standard was employed by asking that at least 95 percent of the tasks done by a worker had been approved. The workers' location was restricted to the USA. A total of 641 responses were collected.\nResults: Manipulation checks\nDiscussion: Drawing insights from community psychology, consumer psychology and marketing research, this study investigates whether the personal power and power gained through the new media transfer into a revenge behavior when consumers complain publicly online after encountering an unsatisfactory experience with a company. This paper contributes to the body of knowledge by introducing the framework of psychological empowerment and noting its association with online public complaining behaviors. Several theoretical implications can be drawn from the study findings, as discussed below.\n", "role": "user" }, { "content": "Purpose: The purpose of this paper is to examine how psychological empowerment affects individuals' likelihood of publicly punishing a company with whom they had unsatisfactory experiences through online complaining behaviors.\nDesign/methodology/approach: A 3 (intrapersonal empowerment: high/low/control) by 3 (interactional empowerment: high/low/control) online experiment was designed using the priming technique. Following the priming tasks, participants were given a scenario in which a restaurant failed their expectations followed by dependent and control measures.\nFindings: Results revealed a significant main effect of interactional empowerment: participants in the low interactional empowerment condition reported being less likely to engage in the revenge-motivated online public complaining behaviors than participants in the control condition. The study also found a significant interaction effect between interactional and intrapersonal empowerment.\nPractical implications: The study findings yield practical application for crisis management and relationship management. 
Understanding the linkage between power and online complaining behaviors should help corporate communication professionals to better perform risk assessment, environmental scanning and crisis communication and management.\nOriginality/value: Few empirical studies have investigated the linkage between empowerment and online complaining behaviors in the consumer context. The present study fills this gap by conceptualizing online public complaining as a revenge-motivated behavior. The study yields both theoretical and practical implications.\n", "role": "assistant" } ]
Implementation and use of B2B-enabling technologies: five manufacturing cases
[ "Business‐to‐business marketing", "Manufacturing industries", "Supply chain management", "Electronic commerce", "Communication technologies" ]
[ { "content": "Summarize the following paper into structured abstract.\n\nIntroduction: The efficient and effective management of the supply chain has increasingly become a primary focus for organisations worldwide. For many years, organisations have been faced with the problem of trying to maximise the efficiency and effectiveness of their operations, when often their fate was not entirely within their scope of control. Reliability and quality of supply of inputs has long been a source of potential problems for organisations attempting to compete on the basis of quality, cost and/or delivery. On the demand side, the accuracy and timeliness of information that could improve knowledge of real demand patterns has also historically been a problem for all types of organisations. The standard response to this issue has been to attempt to use both qualitative and quantitative methods to improve forecasting accuracy. At best these methods are usually approximations based on historical data, and as such are based in the assumption the future will be like the past. Unfortunately this is less and less the case for many companies as they find the business environment changing around them. Rapid changes in technology, changing demographics, globalisation of both markets and industries and requirements for customised products have all combined to refocus awareness on the need to optimise performance of supply chains rather than individual organisations. Companies have become increasingly aware of the fact that the needs of all stakeholders can be better served through this broader, more strategic approach. Technologies such as electronic data interchange (EDI), barcoding and product numbering have been in use for many years, particularly in retail supply chains, as a means of linking information flows to the physical movement of goods. As such, they represent technologies that have been developed to help mitigate some of the problems described above, and in fact provide a vital link between technology and supply-chain management methodologies. What has come to be known as e-commerce has largely been the result of application of multiple technologies to link flows of goods and information about those goods. It is therefore somewhat surprising that while some organisations have seen the application of such techniques as an opportunity to streamline logistics flows, reduce inventory and improve customer service, their acceptance has been far from widespread (Fernie, 1995). Many organisations have only implemented in response to requests (or directives) from trading partners. The importance of new technology in facilitating this process has been further highlighted over the past decade, and in particular over the past five years, by the development of accessible, affordable and practical means by which organisations can transact business electronically.Many benefits are therefore proposed for companies that develop and implement internet-based strategies in the management of their supply chains. A report released by the New South Wales Department of State and Regional Development suggests some of these include reduction in impact of geographical boundaries, reduction in operating costs, the ability to \"lock-in\" customers and the lowering of entry and overhead costs (New South Wales Department of State and Regional Development and Microsoft Australia, 2000). 
In discussing the reduction of geographic barriers, the report does make the point that this is very much an information issue, and that in effect it amplifies the need for coherent and coordinated physical logistics strategies. Information flows (and easily digitised items such as trading documentation) between trading partners will be greatly facilitated through the use of the internet, but the management and integration of physical logistics will still provide significant challenges. This view is supported throughout the literature on the management of the supply chain (Bowersox and Calantone, 1998; Bender, 2000; The Australian Financial Review, 2001; Bytheway and Braganza, 1992; Handfield and Nichols, 1999; Herridge, 1997; Natarajan, 1999; Schwarz, 1998; Magretta and Dell, 1998). Putzger (1998) states that the key criterion in implementation is correct choice of information technology, and that the use of third-party providers for both transportation and information management is the option chosen by successful performers. Bowman (1997) states that many companies fail in the initial stages of implementation because they simply are unable to come to agreement on terms. In describing a supply-chain management implementation in a European company, Hammant (1997) lists seven critical success factors they identified through this process:\n1. a committed organisation, from the board down;\n2. effective programme management;\n3. consistent, pre-emptive communications;\n4. positive action to identify and manage key risks before they become issues;\n5. a well-defined and managed programme baseline, changed as necessary;\n6. a succession of manageable delivery milestones to maintain momentum and confidence; and\n7. an actionable, owned, manageable and measurable set of business benefits.\nGourley (1998), in citing the implementation of supply-chain-management techniques across 15 distribution centres (DCs) of the Mopar Division of Chrysler Corporation in the USA, makes the point that involvement of the DC staff, as well as suppliers, in implementation has been of critical importance. As part of the ongoing management of the program, the company actively encourages the involvement of staff in decision making, as well as soliciting input from suppliers on potential productivity improvements. This supports the earlier discussion on the importance of partnerships and cooperative arrangements for successful supply-chain management. Larkins and Luce (2000) recommend the following \"stepping stones\" for an orderly implementation of supply-chain-management practices in the pulp and paper industry:\n* start small, with a single link with which you can have a good relationship;\n* start internally with a single business process;\n* focus on long-term, sustainable and cost-effective business improvements that will benefit both you and your link; and\n* educate staff and promote buy-in from stakeholders.\nMoller (2000) provides some support for these criteria in documenting the implementation of an advanced planning and scheduling (APS) system at LEGO in Denmark. The three major lessons learnt in this case were:\n1. do not be too ambitious with the timing and expectations for rapid results;\n2. ensure accurate alignment between requirements and system functionality; and\n3. the critical importance of data accuracy.\nIn planning for implementation of supply-chain-management systems, Parnell (1998) recommends the following factors that need to be taken into account:\n* Ensure that both process and system improvements are covered. 
Expensive new tools can become worthless if business processes do not support them.\n* Fix your own internal supply chain first before you attempt to do anything with trading partners.\n* Education in two areas: demand management and system optimisation.\n* Understand the effect on current business systems, how behaviours will need to change, and how performance measures will need to change to reinforce desired behaviours.\n* Do not let \"best\" get in the way of \"better\".\nTyndall et al. (2000) indicate that there are three critical elements that need to be evaluated and balanced to ensure a successful implementation. These are value (relationship between cost and benefits), risk (probability of success - dependent on time span for tangible results), and method (the approach adopted by the company to balance value and risk). Some common issues they identify relating to these three elements are:\n* To ensure value, companies need to be realistic about benefits. Many companies either grossly overestimate the benefits or underestimate the time needed to implement and realise these benefits.\n* Companies need to be very critical about costs. Included should be internal and opportunity costs, real inventory costs, subcontractor costs, systems costs, support costs and asset costs.\n* Mitigate risk by focusing on short-term projects. The justification for this comes from the fact that it will be easier to set action plans, targets and specific time horizons for short-term projects. Long-term projects run the risk of having costs and timing underestimated.\n* Implement in stages and avoid the temptation of trying for a \"silver bullet\" solution. They advise attempting to tackle small parts of larger problems, rather than attempting to solve one large problem in one hit.\n* Understand the supply chain first, take care of basics such as data accuracy at an early stage, and be prepared to learn from the implementation process.\nIn summarizing their approach, they state:\n\"The value of working in stages and by segments is best captured in a wholly counterintuitive maxim: do less with more. In other words, put more resources onto fewer, more implementable initiatives, and make them accountable for results\" (Tyndall et al., 2000, p. 59).\nIn the context of emerging technologies and the implementation of e-commerce applications, Froehlich et al. have identified that most companies will have to address some basic structural issues prior to implementation:\n\"E-commerce places new demands not only on delivery technology, but on the way that business processes are designed. At present, technology is forcing organisations to embark on e-commerce before they have built a coherent model of the business processes they need\" (Froehlich et al., 1999, p. 473).\nThis issue of getting the basic business processes right, either as a precursor to or an essential part of implementation, has also been identified as a barrier. An Andersen Consulting (1994) report has identified inaccurate data, existing systems infrastructure and entrenched business practices as the major barriers to implementation of advanced technologies and innovative management approaches. The importance of getting existing processes in line with new technologies and methodologies serves to highlight the need for careful planning, as well as supporting the use of standardised frameworks for implementation. This paper seeks to examine some of these implementation-related issues in the context of manufacturing organisations. 
As such, it addresses the following research questions:\n* What objectives and expectations do manufacturers have when implementing the EAN system?\n* What levels of planning are involved and what priorities drive investment decisions?\n* What benefits and limitations are encountered as a result of implementation?\n* How do organizational and cultural issues affect the implementation process in manufacturing companies?\n* What are the perceived future trends in development of business-to-business (B2B)-enabling technologies relevant to the manufacturing sector?\nFive companies are examined in turn to assess how each has approached implementation of B2B-enabling technologies, and to compare usage and application for the management of their supply chains.\nMethodology: In order to provide a framework for this project, the model on which the research was based was that of the \"EAN System\". EAN Australia is the organisation that administers, validates and issues EAN (European article numbering) standard barcodes to Australian companies. As well as promoting the use of these barcodes, EAN promotes a system for the adoption and implementation of electronic commerce and supply-chain management. This uses a combination of EAN numbering, barcoding and EDI-type technologies to link the flow of physical goods with the flow of information through a supply chain. This system, therefore, provides a set of integrated technological solutions of the type described in the introduction, where e-commerce-enabling technologies can be applied to the management of supply chains. The membership of EAN Australia, therefore, provides an accessible and relevant context within which to conduct the research. Five case studies were conducted in manufacturing companies that are members of EAN Australia. As members of EAN Australia, the case study companies were using (to some extent at least) some aspects of the EAN system, and were thus involved in using e-commerce-enabling technologies. The five companies that consented to being part of the study came from a group of thirty companies randomly selected from the EAN Australia membership list. The case studies were conducted for the purpose of identifying some of the major issues relating to the implementation of the EAN system, and electronic commerce generally. In order to provide this focus, and to document \"... the procedures and general rules that should be followed\" (Yin, 1994, p. 63), a case study protocol was developed. Following are the major areas of interest for the conduct of interviews (as captured in the protocol):\n* Organisation profile. This section captured the organisation-specific structural issues, product/service offerings, competitive and market issues and industry critical success factors. As such, a general demographic background for each organisation was developed.\n* EAN system background. Objectives and expectations from implementation, planning for implementation, techniques in use, investment priorities, and other B2B e-commerce-related issues.\n* Business outcomes from implementation. Benefits derived and limitations encountered as a result of implementation.\n* Implementation issues. Levels of employee involvement, attitudes toward implementation within the organisation, company-specific issues relevant to the implementation process, and some assessment of the culture within the organisation.\n* Future trends. 
Each interviewee was asked to provide some insight as to what they saw to be future directions in electronic commerce. Interviews were conducted with at least one person from each company, being the nominated EAN system co-ordinator for that organisation. Interviews were conducted through a combination of site visits and phone calls.\nAnalysis and comparison of case studies: Organisation profiles\nEAN system background: Objectives and expectations of implementation of the EAN system\nBusiness outcomes from implementation: Summary\nConclusion: These five case studies highlight both the potential for B2B-enabling technologies to facilitate improvements in manufacturing operations, and the reality of their current limitations. On the one hand, all five had invested substantial resources in the implementation and use of existing technologies (e.g. barcoding, EDI), and also saw the future being further dominated by use and application of emerging technologies (e.g. the internet) for the improved management of supply-chain operations. On the other hand, evidence of extended application of these technologies backward along their supply chains (i.e. with suppliers) was limited, despite a general recognition that this would yield further benefits. Although the reasons for this varied from company to company, one strong theme to emerge was that business benefits could be attributed as much to complementary methodologies (cooperative arrangements, cross-functional involvement, etc.) as to the use of the technologies themselves. Access to point-of-sale (POS) data (as close as possible to real time) was also seen to be a critical aspect of the system that was not as readily available as it should be, and as illustrating a bias toward value being captured at the demand end of the chain (i.e. by retail trading partners). As manufacturers, these companies understand the importance of this type of information reflecting real demand patterns. As such, the cases point to the critical nature of information in this area, and to the need for appropriate relationships between trading partners to underpin and enable these data flows. The view of the future also pointed to the more effective management of the supply chain being further enabled by the application of internet technologies. This view was balanced, however, by the recognition that in order for these technologies to provide significant value, collaboration between trading partners would also need to be a key point of focus. The evidence that POS data was not easily available to these companies highlights the importance of collaboration and trust in this context. The technology for sharing this information has been available for many years, and yet the propensity for it to be shared appears more a function of relations between trading partners than of the technology itself.\n", "role": "user" }, { "content": "Purpose: - To determine the extent to which long-established technologies (such as barcoding and electronic data interchange (EDI)) have been applied across manufacturers' supply chains, identify factors influencing implementation, discuss future trends for new technologies, and document the business benefits accrued.\nDesign/methodology/approach: - Five case studies were conducted in manufacturing companies that are members of EAN Australia. 
As members of EAN Australia, the case study companies were using (to some extent at least) some aspects of the EAN system, and were thus involved in using e-commerce-enabling technologies.\nFindings: - Indicates that the use of technology is largely restricted to the demand side of the supply chain (in this study, retail trading partners), despite a general understanding of the potential benefits of implementation backward through the supply side. There is general agreement that emerging technologies (such as the internet) will provide further opportunities, and at the same time increase pressure for extended use.\nResearch limitations/implications: - The study is limited to organizations operating in the Australian fast-moving consumer goods (FMCG) sector, and the results therefore need to be read in this context. It would be useful if these findings could be compared with those from other countries and different industry sectors.\nPractical implications: - It is apparent that for manufacturers a critical loop is often not closed due to the limited availability of point of sale (POS) data from major trading partners, and that this has served to limit value capture (by the manufacturers) through extended application.\nOriginality/value: - Highlights both the potential for business-to-business (B2B)-enabling technologies to facilitate improvements in manufacturing operations, and the reality of their current limitations. An important emergent theme was that business benefits could be attributed as much to complementary methodologies (cooperative arrangements, cross-functional involvement, etc.) as to the use of the technologies themselves. As such, the cases point to the critical nature of information in this area, and to the need for appropriate relationships between trading partners to underpin and enable these data flows.\n", "role": "assistant" } ]
Exploring the vacuum in training evaluation: is this a case of mission impossible?
[ "Corporate strategy", "Strategic alignment", "Training evaluation", "Human resource management" ]
[ { "content": "Summarize the following paper into structured abstract.\n\n__NO_TITLE__: Human resource professionals recognise the importance of gaining feedback from learning events, but research reports question the thoroughness of evaluation processes, claiming they rarely happen to the satisfaction of senior managers (Horwitz, 1999; Sloman, 2004). Consequently, training budgets become an easy target for managers during periods of cost rationalization. One problem facing learning professionals is overcoming the complexity of defining a meaningful cause/effect relationship between the actual training and resultant benefit for the organization. The story is all too familiar, once the training has been completed an evaluation vacuum emerges until someone takes action to find out what new knowledge has been learned or what behaviour has changed since the learning event, but often this evaluation never happens. For the purpose of this examination, an evaluation vacuum is an innovative term that describes a conceptual gap, located somewhere between the actual learning event and a realization of the resultant benefits.\n1. Time from training: When is the best time to evaluate? The theoretical arguments are mixed, but a lapse in time from the point of training to evaluation makes it harder for learners to recall meaningful and accurate information, prolonging the evaluation vacuum. Peoples' memories fade and it can become difficult to show a direct relationship between the training event and any consequential changes in performance. Additionally, research findings among front-line managers showed that pressure of work, lack of time or poor knowledge of how to evaluate were given as the main reasons why training evaluation was overlooked\" (Short, 2008). Instead, managers would benefit from taking advantage of this \"window of opportunity\" to isolate and evaluate the specific benefits before newly acquired knowledge or skills blend with other learning events.\n2. Level of and type of learning - ease versus value: Taking guidance from Bloom's well-known taxonomy of learning (Bloom et al., 1956), we are reminded that learning based around knowledge acquisition and skill transfer can be more straightforward to evaluate than complex problem-solving or conceptual based learning. However, this information may be perceived to have a lower strategic value for organizations. For example, research among human resource professionals (in the New Zealand manufacturing sector) indicated that evaluating the benefit of culture change programs was much more complex than straightforward skill-based learning. This presents a paradox for evaluators and may account for the abundant amounts of data generated in course evaluation reports, but a lack of meaningful and reliable evidence at the strategic level, where investments in learning and development can be significant (Short, 2008).\n3. Number of projects: The nature of organizational development means that several training initiatives are frequently integrated, adding a level of complexity to the evaluation vacuum. Integrating training projects can make it more challenging to obtain meaningful information from one specific intervention. The resultant effect of multiple training projects produces a \"learning bleed\". Although this situation offers a solid platform for holistic and seamless development, it makes a specific return on investment evaluation very hard to calculate. 
Increasingly, in segmented organizations, business units undertake diverse training projects at the same time in order to accelerate business improvement. This strategy produces a higher potential for learning bleed, making the evaluation processes even more challenging. Consequently, the evaluation activities become sporadic and anecdotal, creating a tendency to pick off those projects perceived as having the least benefit. This is a high-risk strategy because without reliable information, how would managers know which projects to cancel? Figure 1 illustrates this scenario.\n4. Qualitative versus quantitative: The choice of philosophical paradigm has much bearing on the perceived value of an evaluation activity. People who prefer an objective approach to gathering data would value an evaluation activity based on gathering numerical/factual information. In this approach, direct observation, tests and assessments supply the data. Earlier we suggested that skill-based training can provide these outputs and make the evaluation activity more straightforward - conceptually minimising the evaluation vacuum. Conversely, qualitative evaluation often contains socially constructed information, is focused on meaning and tries to make sense of unfolding events. It calls for a deeper interpretation to establish the significance or question the appropriateness of information - and this process takes time. For example, some training interventions attempt to bring about a long-term change in culture as a means of improving organisational performance (OP). In these settings, qualitative evaluation activities are easy to avoid, due to the long timeline of gathering and analysing complex information, but they are essential to the overall process. A pragmatic solution for evaluators of OP training would be to include mixed methods of obtaining data, such as small-scale surveys and training reaction sheets, complemented with focus groups and climate surveys.\n5. Number of people involved: Few people would doubt that evaluating the training outcomes for one person is much easier than for an entire work team or organization - suggesting that smaller numbers of learners reduce the evaluation workload. Case study evidence indicates that small-scale evaluations are more likely to happen than major organisational projects (Short, 2008), yet evaluating single-event activities is thought to have less influence on determining strategic success - unless the individuals are senior managers, whose learning and performance affect the entire organization. Because the process of evaluating larger numbers of learners is complex, random feedback or structured techniques (such as the nominal group technique) can help to reduce the workload. However, these techniques may be vulnerable to inconsistency or produce spurious results.\n6. Goal definition: Predicting the precise outcomes of a training intervention is challenging. Was the learning outcome clear or vague? In other words, did the training designer use a target rifle or a shotgun approach to determine learning objectives? Universal goals, such as general awareness training, add breadth, but they can provide shallow information and there is a strong possibility of \"learning bleed\" (see No. 3). Clearly defined learning goals, for an individual or specified program, are much more precise and have the effect of focussing the training with little or no \"learning bleed\". However, other complementary or peripheral benefits of the training can be overlooked.\n7. 
Indirect variables: The following considerations, to a varying degree, cloud the picture and fill the evaluation vacuum with misleading information. In this context, the word \"indirect\" refers to the contribution made by people not connected directly with the learning. Examples of third-party contamination include:* senior managers asserting their organization has an open and honest culture, but the evaluation information becomes susceptible to misinterpretation or distortion as it progresses up and down the organization;* the influence of poor timing on an evaluation activity. It is easy to overlook vital information when the right people are not listening or do not receive details at the appropriate time;* people perceiving information differently. Important feedback can be lost if the evaluation has low validity among the stakeholders;* political and diverse interests in the outcomes can obscure the evaluation; and* historical events can produce a halo (enabling) or horn (barrier) effect.\n8. Content and volume of learning demand: Too much training content at one sitting or no opportunity to apply the learning afterwards clouds the evaluation picture and creates micro-vacuums. Sometimes, the lists of expectations following a training course are overwhelming and not possible to evaluate accurately against the course objectives. All too frequently, course objectives are prefixed with non-behavioural, obscure and/or immeasurable learning outcomes. Evaluators should, therefore, work closely with training course designers to set realistic learning demands and define resultant outcomes.\n9. Inability to separate the training from tacit learning or broader experience: When adult learners undergo training, they bring a wealth of accumulated experience from other aspects of their lives. After training, the learners assimilate their newly acquired knowledge or skill with their tacit knowledge to form a new level of understanding. Organizations rarely attempt to quantify or value the level of tacit knowledge within their employees - making it almost impossible to isolate the specific benefit of the training once \"learning bleed\" has taken place. In summary, the need for evaluating workplace training has never been more important, as tough economic conditions put training budgets under scrutiny. By using the nine key findings contained within this report as a checklist for action, training practitioners will be able to:* adopt a focused approach to managing the risks associated with the evaluation vacuum;* develop a meaningful evaluation strategy that has validity; and* assume a critical and reflective mindset to further improve the credibility of evaluation methods. Practitioners already know from experience that communication vacuums in the workplace fill up quickly with discontent, speculation and sometimes untruths. Being aware of, and minimising, the evaluation vacuum provides an opportunity to reduce these risks and allows for a more considered explanation of training outcomes.\n", "role": "user" }, { "content": "Purpose: - This paper presents research-based insight on the challenges of evaluating training activities in today's complex organizational settings.\nDesign/methodology/approach: - The research draws on three case studies conducted in the New Zealand manufacturing sector, as well as relevant literature. 
The commentary takes a critical-realist perspective and challenges learning and development professionals to address the poor reputation of training evaluation.\nFindings: - Human resource practitioners recognise the importance of gaining feedback from learning events, but research reports question the thoroughness of evaluation processes, claiming they rarely happen to the satisfaction of management. Consequently, training budgets become an easy target during periods of rationalization. The problem centres on overcoming the complexity of defining a meaningful cause/effect relationship between the training and resultant benefit. This research discovered the presence of an \"evaluation vacuum\" and nine thematic areas requiring close attention. The paper offers reasons why the evaluation of training is becoming increasingly difficult.\nResearch limitations: - The findings are contextual and may not fit all settings, but they offer a comparative account of training evaluation in both straightforward and complex learning environments.\nPractical implications: - The paper has real and practical implications for human resource professionals. Evaluation of training is not a trivial issue and organizations need to get much better at explaining the beneficial outcomes derived from investments in training.\nOriginality/value: - This paper will be of value to human resource professionals and managers, assisting them to think differently about evaluating training. Innovative concepts such as the \"evaluation vacuum\" and the term \"learning bleed\" clarify priorities and contribute to a new perspective on evaluation.\n", "role": "assistant" } ]
Effect of information sharing and process coordination on logistics outsourcing
[ "Performance", "Information sharing", "Logistics outsourcing", "Process coordination" ]
[ { "content": "Summarize the following paper into structured abstract.\n\n1. Introduction: With the development of e-commerce and global competition, logistics outsourcing has received more attentions from academicians and practitioners. Logistics outsourcing enables third-party-logistics (3PL) users to leverage external logistics resources (Dyer and Singh, 1998; Lavie, 2006; Bolumole et al., 2007) that may be difficult or costly to manage in-house (Lieb et al., 1993; Razzaque and Sheng, 1998; Cho et al., 2008). Logistics outsourcing can reduce non-core business investments (Razzaque and Sheng, 1998), improve logistics performance (Chen et al., 2010) and customer service (Rabinovich et al., 1999; Skjoett-Larsen, 2000), and support supply chain integration (Harrington, 1995). However, previous 3PL studies have shown that logistics outsourcing is often regarded negatively by 3PL users, as the use of external resources increases uncertainty in operations and customer service. Examples include the inability of 3PL providers to meet specific logistics requirements or future growth needs, incompatibility between the information systems of 3PL providers and users, and security issues (Razzaque and Sheng, 1998; Lau and Zhang, 2006; Ansari and Modarress, 2010). Given this mixed situation, further investigation is needed to understand the critical factors in logistics outsourcing.\n2. Theoretical background and hypothesis development: In the conceptual model (Figure 1), we propose that information sharing and process coordination, as integrative mechanisms, can facilitate the logistics outsourcing of 3PL users, which in turn, leads to performance improvements.\n3. Research methodology: 3.1. Questionnaire design\n4. Discussion and managerial implications: 4.1. Improving logistics outsourcing through information sharing and process coordination\n5. Conclusions and limitations: Based on the RBV perspective, this study develops a holistic model to examine two antecedents, information sharing and process coordination, and two consequences, operational and financial performance, of logistics outsourcing. Furthermore, we empirically test the antecedents and consequences of three types of outsourcing simultaneously, which further enriches our understanding of different logistics outsourcing activities. Our findings show that information sharing and process coordination should be considered when companies exploit external resources, because integrative mechanisms, as controls between companies, can leverage resources to create capabilities across boundaries.\n", "role": "user" }, { "content": "Purpose: The purpose of this paper is to test how integrative mechanisms, including information sharing and process coordination, influence logistics outsourcing, and how logistics outsourcing influence performance from an extended RBV perspective.\nDesign/methodology/approach: The structural equation modeling (SEM) method is used to examine the proposed model, based on data collected from 361 companies in greater China.\nFindings: Integrative mechanisms are helpful for logistics outsourcing (basic, customized, and advanced outsourcing). Specially, information sharing contributes to customized and advanced outsourcing, but has no significant effect on basic outsourcing. In contrast, process coordination improves basic and advanced outsourcing, but insignificantly influences customized outsourcing. Besides, each type of logistics outsourcing has differently effects on 3PL users' performance. 
This study contributes to 3PL theories and practices.\nOriginality/value: This study empirically examines the antecedents and the outcomes of logistics outsourcing, contributing to the 3PL literature and practice.\n", "role": "assistant" } ]
Monitoring processes through inventory and manufacturing lead time
[ "Operational performance", "Lean manufacturing", "Inventory", "OT-chart" ]
[ { "content": "Summarize the following paper into structured abstract.\n\n1. Introduction: The issue of work-in-process inventory (WIP) in discrete manufacturing is not new. However, it remains as a highly topical question, which warrants further exploration. WIP is present in every manufacturing process that transforms raw materials into finished products. It is especially visible if the output of a workstation is transferred in batches to the following station. The traditional manufacturing paradigm in western countries considers that WIP is useful because it counterbalances uncertainty and variability in the process and it keeps the manufacturing process running. Besides, financial accounting considers inventory as a valuable asset and a source of profits. However, WIP entails handling and holding costs and thus the quest for a trade-off between costs and benefits of holding inventories has been offering opportunities for research for the last decades.\n2. Overview of relevant literature and research questions: 2.1. Lean manufacturing and the zero inventories paradigm\n3. Research approach: This paper is mainly based on the mathematical approach to management, which focuses on system analysis and decision making, aiming to identify and evaluate the effectiveness of processes and decisions. The primary focus of this approach is to develop an analytical model. Through this device, in Section 4.1, we consider a perfectly balanced discrete batch manufacturing system with N sequential operations, all with the same, and constant, performance time per piece C. The system has to fabricate/assemble a production run of Q parts (as previously scheduled). Each operation is performed (piece by piece) on m transfer batches of Q/m parts each. Taking into account the description of the system, we analytically derive expressions for LT for the first lot and for the following lots. In this paper, we consider that WIP includes parts waiting before a station to be processed, parts that wait to be moved to the next workstation after being processed and also parts being processed (despite the fact that Conway et al. (1988) do not consider parts being processed).\n4. Determination of WIP and manufacturing LT: 4.1. Model formulation for a serial balanced production system\n5. Recursive model formulation for imbalanced serial systems with different lots including setup time: 5.1. Model formulation\n6. Conclusions and implications: In this paper, we have presented a rigorous approach to compute the manufacturing LT and WIP for an item in a serial production system under some assumptions. The purpose of this study was to investigate the relationships between WIP, manufacturing LT, transfer lot size and other operational parameters in a specific context in order to check whether these relationships support the empirical principles of lean manufacturing. Such relations have been described by means of Equations (1)-(31). They provide answers to our research questions and support some empirical practices of lean manufacturing (Hopp et al., 1990):\n", "role": "user" }, { "content": "Purpose: Since lean manufacturing considers that \"Inventory is evil\", the purpose of this paper is to find and quantify the relations between work-in-process inventory (WIP), manufacturing lead time (LT) and the operational variables they depend upon. 
Such relations provide guidelines and performance indicators in process management.\nDesign/methodology/approach: The authors develop equations to analyse how, in discrete deterministic serial batch processes, WIP and LT depend on parameters like performance time (of each workstation) and batch size. The authors extend those relations to processes with different lots and create a multiple-lot box score.\nFindings: In this paper, the relations among WIP, LT and the parameters they depend on are derived. Such relations show that when WIP increases, LT increases too, and vice versa, and they identify the parameters on which both depend. Finally, these relations provide a framework for WIP reduction and manufacturing LT reduction and agree with the empirical principles of lean manufacturing.\nResearch limitations/implications: Quantitative results are only exact for discrete deterministic batch processes without any delays. Expected results might not be achieved in real manufacturing environments. However, qualitative results show the underlying relations amongst variables. Different expressions might be derived for other situations.\nPractical implications: Understanding the relations between manufacturing variables allows operations managers to better design, implement and control manufacturing processes. The box score, implemented on a spreadsheet, allows testing the effect of changes in different operational parameters on the manufacturing LT, total machine wait time and total lot queue time.\nOriginality/value: The paper presents a discussion about process performance based on the mutual influence between WIP and LT and other variables. The relation is quantified for the discrete deterministic case, complementing the models that exist in the literature. The box score allows mapping more complex processes.\n", "role": "assistant" } ]
Wage differentials between foreign invested and domestic enterprises in the manufacturing: Evidence from Vietnam
[ "Wage differentials", "Ownership", "Manufacturing", "FIE", "State owned enterprise" ]
[ { "content": "Summarize the following paper into structured abstract.\n\n1. Introduction: There is a vast literature on the nexus between foreign direct investment (FDI) and wage growth in developing host countries. A number of studies have analyzed the wage effects of FDI in East Asian economies (Lipsey and Sjoholm, 2004a; Ramstetter, 2004; Ramstetter and Sjoholm, 2006; Sjoholm and Lipsey, 2006). Some found that there were positive wage differentials between foreign multinational enterprises (MNEs) and domestic firms in these economies after controlling for various firm characteristics and the educational attainment of workers. However, aside from limited evidence in studies by Fukase (2014), Ramstetter and Phan (2007), and Tran (2007), there is very little evidence regarding wage differentials between foreign-invested enterprises (FIEs) and domestically owned ones in the context of Vietnam. Data unavailability has been an important reason for this crucial gap in the literature. This paper aims to fill that gap by providing analyses of manufacturing wages using a firm-level data set from Vietnam's Enterprise Survey for the period 2000-2009 which allows for examination of the wage differentials given the outward-oriented reforms over that period.\n2. Literature review: The international business literature posits that FDI has brought new capital, superior technology, and management skills to the host country. An additional benefit of FDI, though received less attention in a developing host country is that FIEs are likely to pay higher wages than domestic enterprises. The explanations of wage differentials between FIEs and domestic firms are varied. On the demand side, these FIEs - affiliates of MNEs - are often argued to possess firm-specific assets (Caves, 2007; Dunning, 1993; Markusen, 2004) comprising production technology, marketing networks, and management system that are in turn thought to make these foreign enterprises more efficient than domestic firms. Thus, FIEs tend to be relatively capital or technology intensive compared to domestic firms in developing host countries, requiring a considerable skilled workforce. These features can increase the likelihood of wage premium among FIEs.\n3. Data and descriptive statistics: This study employs the firm-level data compiled from unpublished returns to the Enterprise Survey conducted by the GSO, Vietnam. First, this Enterprise Survey covers all non-household firms above a workforce of ten employees and a representative sample of smaller firms. The survey provides firm-level information on output, employment, wages, capital stock, intermediate inputs, and firm-ownership categories. Second, the survey coverage over time may not be complete and coverage appears to vary over time (Ramstetter and Phan, 2013)[1]. Third, data on each firm are coded and stored individually in a unique identification unit by tax code and the Vietnamese Standard Industry Classification (VSIC) at the four-digit level. Finally, the coverage of the raw data that is used to construct a panel data in this study is likely to be reasonably good compared to surveys/censuses in other developing economies of similar income levels.\n4. Econometric evidence on wage differentials in Vietnamese manufacturing: The previous discussion suggests that wage differentials in the manufacturing sector are associated with firm ownership categories. On the whole, FIEs pay relatively higher wages compared to domestic firms while DP firms have the lowest wages. 
Moreover, it appears that the wage differentials could depend on a firm's characteristics, such as its size and capital intensity, as well as industry-specific factors. Therefore, I continue with an econometric analysis to examine the effect of firm ownership on manufacturing wages by explicitly controlling for these factors.\n5. Conclusions: This paper has examined determinants of inter-firm wage differentials, in particular the FIE-domestic wage differentials in Vietnamese manufacturing, using firm-level data over the period 2000-2009. The wage differentials were first investigated through a comparison of manufacturing wages among five ownership types. Next, the FIE-domestic wage differentials were examined by estimating the determinants of manufacturing wages with a special reference to the intra-industry wage differential using firm-level panel data.\n", "role": "user" }, { "content": "Purpose: The purpose of this paper is to examine wage differentials between foreign-invested and domestic enterprises in Vietnamese manufacturing in the context of the outward-oriented reforms of the 2000s.\nDesign/methodology/approach: An analysis is undertaken by means of descriptive statistics and econometric investigation using a firm-level data set from the Enterprise Survey of Vietnam.\nFindings: It is shown that there has been a positive wage differential between foreign-invested enterprises (FIEs) and domestic enterprises over the period 2000-2009. More importantly, the FIE-domestic wage differentials are found to be significantly positive after accounting for differences in capital intensity, size, firm location, and industry features. Furthermore, statistical evidence shows that these wage differentials narrowed over the period 2006-2009.\nOriginality/value: This is one of the first studies to examine the FIE-domestic wage differentials given the outward-oriented economic reforms in Vietnam since 2000.\n", "role": "assistant" } ]
The critical factors of shopping malls in urban complexes in China
[ "Soft factors", "Hard factors", "Customer behavior", "Customer satisfaction", "Shopping mall", "Urban complex" ]
[ { "content": "Summarize the following paper into structured abstract.\n\n1. Introduction: China's economy has been on a rising trend in recent years. As a result, the property prices have been surging fast, too. Therefore, the government has promulgated various regulations and policies to re-adjust the residential property market. This caused developers and investors to switch to the commercial property market. In particular, urban complex is a popular mode exhibited by developers. Urban complexes have large scales and are located in core areas in the major cities. More importantly, they combine different functional spaces through a group of buildings. Hence urban complexes have been developing rapidly recently.\n2. Literature review: Customer satisfaction is a critical issue for both customers and shopping mall management (Anselmsson, 2006). The marketing concept says, \"the key to achieving organizational goals consists in determining the needs and wants of target markets and delivering the desired satisfactions more effectively and efficiently than competitors\" (Kotler, 1991, p. 10). Within this theoretical framework, one fundamental assumption is that if a customer is satisfied with a product, that customer is likely to remain loyal to the product in question (Anselmsson, 2006). According to Anselmsson (2006), customer satisfaction with a shopping center may be viewed as an individual's emotional reaction to personal evaluation of the total set of experiences encountered at the shopping center. That total set of experiences is rather diverse: shopping, catering, travelling to and from the center, etc. Customer satisfaction is important because a satisfied customer is likely to send more business back to the company, whereas an unsatisfied customer may be more apt to prevent the business from soaring. When a customer is satisfied with a product or service, he/she is likely to spread the word and help increase the company's profits (Ask.com, 2015). Mohsan et al. (2011) found that customer satisfaction had significant impact on customer loyalty and intentions to switch: the higher the customer satisfaction is, the higher the customer loyalty is and the lower the intentions to switch are.\n3. Our framework: This section introduces the framework of study as shown in the following Figure 1.\n4. Data source: This section describes the data source. A questionnaire consisting of two parts is constructed:\n5. Tests and results: 5.1 Ranking analysis\n6. Conclusion and implications: This paper identifies the critical factors of a shopping mall in an urban complex in China from customers' perspective, using Nanjing Wanda Plaza as a typical example. The main results are summarized as follows:\n", "role": "user" }, { "content": "Purpose: As the Chinese Government is planning to transform the economy from an export-oriented economy into a consumption-oriented economy, the impact of Chinese consumers to the economy will become more and more important. However, there is a lack of literature on Chinese consumers' behavior and the critical factors of shopping malls in China. Hence, this study aims to determine the critical factors of a shopping mall in an urban complex in China from customers' perspective, using Nanjing Wanda Plaza as an example for our case study.\nDesign/methodology/approach: This study carries out ranking analysis and factor analysis to determine the critical factors of the shopping mall. 
Then cluster analysis is applied to divide the customers into three segments, showing the importance of each factor to different customer segments. Furthermore, correspondence analysis is conducted to investigate the relationship between customer segments and customer characteristics (gender, occupation, age and income). This method can show how customer characteristics affect the critical factors of the shopping mall.\nFindings: Sensual enjoyable shoppers consider the \"soft factors\" to be superior to the \"hard factors\", whereas the pragmatic shoppers are just the opposite.\nOriginality/value: This study can serve as a useful reference for developers in designing shopping malls in urban complexes to attract more customers.\n", "role": "assistant" } ]
The effects of relationship maintenance and relationship investment on self-service technology relationship performance
[ "Technology adoption", "E-marketing", "E-service", "Service quality (SERVQUAL)" ]
[ { "content": "Summarize the following paper into structured abstract.\n\nIntroduction: In recent years there has been a dramatic increase in the interactions that occur between humans and machine. As a result, the notion of self-service technology (SST) has been drawing a lot of attention from both academics and practitioners, because it is changing the intrinsic practice of service delivery in many industries, such as banking, tourism, and retailing (Johns, 2014; Cunningham et al., 2008). More and more businesses are now launching SSTs to encourage consumers to produce services for themselves without staff involvement (Meuter et al., 2005). This has led to a decline in the importance of interpersonal, face-to-face, relationships in a service delivery context.\nTheoretical background and conceptual framework: SST in retailing; in-store kiosk\nResearch hypotheses development: The influence of dedication-based relationship maintenance on perceived relationship investment\nMethod and results: Pilot test\nConclusions and implications: Discussions\n", "role": "user" }, { "content": "Purpose: The purpose of this paper is to extend the utilitarian value of the dedication-based relationship maintenance mechanism of social exchange theory and customer perceived relationship investment to investigate the relationship performance of a retailer launching a self-service technology (SST). Computer anxiety and time consciousness are hypothesized to moderate the effects among these relationships.\nDesign/methodology/approach: The results of the structural equation model, with in-store kiosk use experience data collected for 211 respondents, supported the research model. Multiple regression analysis was used for testing the moderating effects.\nFindings: The utilitarian value of dedication-based relationship maintenance is related to perceived relationship investment. Higher levels of customer-perceived relationship investment impact relationship performance. Computer anxiety and time consciousness act separately as both partial and full moderators.\nResearch limitations/implications: First, this study did not consider different kinds of products/services to have different effects with regard to customer cognition. Second, most of the respondents were students, and this is a limitation in business research, because of such factors as lower incomes and higher information technology ability as compared to individuals with other occupations. Third, it is difficult to distinguish whether the level of perceived convenience is due to the convenience stores per se or the in-store kiosks that they have. Future research may thus consider analyzing in more detail how perceived convenience is evoked. Finally, future research can consider constraint-based relationship maintenance mechanisms with regard to operating in-store kiosk businesses.\nPractical implications: Retailers who are willing to continually launch SSTs should tie such efforts to their relationship marketing strategies. Moreover, retailers who are willing to launch e-businesses should establish strategies designed to enhance customer experience with regard to the use of technology. 
Finally, launching SSTs should involve the continual development of an effective purchasing process and functional relationship marketing strategies.\nOriginality/value: This paper can help managers organize relationship maintenance mechanisms, especially with regard to the development of user utilitarian value, in order to obtain improved relationship performance.\n", "role": "assistant" } ]
Determinants of tax stamp evasion in rural districts in Ghana: A case of Twifu Atti-Morkwa and Hemang Lower Denkyira districts
[ "Ghana", "Informal sector", "Binary and multinomial logit models", "Evasion", "Tax stamp" ]
[ { "content": "Summarize the following paper into structured abstract.\n\nIntroduction: The provision of public goods, services and infrastructure is a basic function of governments in all countries around the world. The ability of a government to provide adequate public goods largely depends on tax revenue mobilization (Rakner, 2001). In recent times, generating adequate tax revenue to finance public goods in Ghana has been a challenge as majority (about 86 per cent) of employed Ghanaians are found in the private informal sector (Ghana Statistical Service [GSS], 2012). The existence of large informal sector brings about high level of tax evasion because most of the informal operators do not register with Ghana Revenue Authority for identification, assessment and collection of tax (Adu and Amponsah, 2017). Using the Stock-Watson DOLS estimate for currency demand, Asante (2012) estimated that evaded tax in Ghana attributed to underground economy (informal sector) from the period 1990 to 2010 ranges from 4 per cent to about 14 per cent of the official economy. Terkper (2013) also predicted that, out of about 1.5 million Ghanaian tax population paying direct taxes, the contribution by the informal sector represents only 5 per cent of the number. Indeed, the aforementioned studies indicate that the sheer size of the Ghanaian informal sector poses a threat to tax revenue mobilization.\nLiterature review: Tax stamp system in Ghana\nMethodology and hypotheses: Study area\nResult and analysis: To ensure that no two variables are linearly dependent, the multicollinearity test was done. The mean VIF of 1.09 obtained for this study shows that there is no problem of multicollinearity. Table II shows the result of the VIF test.\nConclusion and recommendation: This study analyzed the effects of economic and non-economic factors on tax stamp evasion amongst micro-taxpayers in Twifu Atti-Morkwa and Hemang Lower Denkyira districts. Tax stamp evasion was measured in two different ways: as a binary variable (evasion/no evasion) and as an ordered variable (no evasion, partial evasion and full evasion). Effects of economic and non-economic factors on tax stamp evasion as a binary variable were analyzed using a binary logit, whilst multinomial logit was used to analyze tax stamp evasion as an ordered variable. Based on the analysis of the binary logit, it can be concluded that the likelihood of micro-taxpayers to evade tax stamp is predicted by age and transportation cost.\n", "role": "user" }, { "content": "Purpose: The purpose of this study is to examine tax stamp evasion at Twifu Atti-Morkwa and Hemang Lower Denkyira districts in the central region of Ghana.\nDesign/methodology/approach: A cross-sectional survey design was adopted to sample 305 micro-taxpayers through the use of multi-stage sampling technique. Primary data were collected from the micro-taxpayers using structured interview. Binary and multinomial logit regression models were used to regress the tax stamp evasion on economic and non-economic factors.\nFindings: The study found that the likelihood of micro taxpayers to evade tax stamp is predicted by age, application of sanctions, guilt feeling, transportation cost to tax office and rate of tax audit. Thus, the study found partial support for expected utility, planned behaviour and attributory theories in explaining tax evasion behaviour of micro-taxpayers.\nPractical/implication: There are several measures of addressing tax evasion behaviour of micro taxpayers. 
Evasion behaviour can be deterred by enforcement strategies such as the application of sanctions and regular tax audits, the establishment of more tax offices in the districts and writing normative messages on the faces of tax stamp stickers.\nOriginality/value: This study helps explain the tax evasion behaviour of micro-taxpayers of a developing economy like Ghana using a special type of tax design meant to capture such taxpayers in the tax bracket. To the best of our knowledge, the study is unique in terms of the means of measuring tax evasion and the methodologies used.\n", "role": "assistant" } ]
Optimal availability of failure-prone systems under imperfect maintenance actions
[ "Maintenance", "Maintenance reliability", "Preventive maintenance", "Engineer availability process" ]
[ { "content": "Summarize the following paper into structured abstract.\n\n1. Introduction: In the current context of an increasingly wild competition, the constraints of time, quality and cost imposed on the industrial companies put them in front of the obligation to ensure a maximum availability of their production equipment at the lowest cost. Under these conditions, the implementation of preventive maintenance strategies proves to be inevitable.A maintenance strategy is defined as a decision rule that establishes the sequence of actions to be undertaken according to the state of the system. With each maintenance action one associates, a cost, a duration and a certain quality or efficiency. The performance of a strategy is then generally evaluated in terms of the average total cost on a given horizon or in terms of the system stationary availability. Other performance criteria are proposed in the literature.A great number of publications propose several maintenance policies, implying different types of preventive and corrective actions, such as inspections, replacements by new or used identical equipment, overhauls or repair to the as good as new state, the minimal repair which consists in bringing the system back to the same operating state as just before failure (as bad as old state), etc.Basic maintenance strategies have been proposed in Barlow and Proschan (1965). A great number of publications on the subject followed with maintenance policies based on complex mathematical models using the renewal theory and many other stochastic processes. There are several reviews of the literature on the subject summarising the various models and gathering them in different classes. Among these reviews, we find those of Valdez-Flores and Feldman (1989) and Wang (2002).Various maintenance models consider that maintenance actions are perfectly executed. Actually, the effectiveness of maintenance actions is generally between the two extreme limits (\"as good as new\" and \"as bad as old\"), what is generally called imperfect maintenance. Several models of imperfect maintenance were proposed in the literature. They can be classified in two categories: models based on an arithmetic reduction of the age of the system (Malik, 1979; Kijima et al., 1988; Kijima, 1989), and models of reduction of the intensity function (Chan and Shaw, 1993; Shin et al., 1996).With regard to the reduction of age, the improvement of the state of the system, following a maintenance action, is equivalent to an arithmetic reduction of its age. For the models of reduction of the intensity of failures, the improvement of the state of the system, following a maintenance action, is equivalent to a reduction of its failure rate of a quantity proportional to its value right before maintenance. A literature review on imperfect maintenance can be found in Pham and Wang (1996) and Wang and Pham (2006).In this paper, we consider a general industrial framework where preventive and corrective maintenance actions are imperfect. In fact, we do not always find the best-qualified technicians, nor the most suitable tools or spare parts to carry out maintenance actions. We consider in this context a periodic preventive maintenance strategy applied to a single-unit system prone to random failures. A similar maintenance policy has been studied by Wang and Pham (2006) considering cost optimisation in a context of maintenance actions with negligible durations. 
Here, we model and optimise the system stationary availability for this maintenance policy, taking into account the durations of maintenance actions. Moreover, due to the computational complexity of the quasi-renewal function used in the mathematical model, we propose a new numerical algorithm allowing the computation of this key quantity for systems whose times to first failure follow any given distribution. In the next section we define the strategy and establish the expression of the system stationary availability. We also prove that there always exists an optimal preventive maintenance period, T*, maximising the system stationary availability for any given set of parameters regarding the maintenance action durations and the system failure distribution. A sufficient condition for the uniqueness of an optimal solution is also derived. The system asymptotic average availability calls upon the quasi-renewal function, whose analytical expression is very difficult, even impossible, to obtain in a closed form. In section 3 we develop a computational procedure for this function for any given quasi-renewal process with non-negligible constant repair time. Numerical examples are presented and corresponding results are discussed in section 4. The paper is concluded in section 5.\n2. Strategy definition and mathematical model: We consider a randomly failing system with a known continuous lifetime probability distribution. An imperfect preventive maintenance action is periodically performed (every T time units). It follows a (p, q) rule, which means that it restores the system to the \"as good as new\" state with probability p and keeps it in the \"as bad as old\" state with probability q (q=1-p). An estimation method of the probability p in practice can be found in Whitaker and Samaniego (1989). If the system fails between successive preventive maintenance actions, it undergoes an imperfect repair action of non-negligible duration, after which the system failure inter-arrival time reduces to a fraction of the immediately preceding one (all successive lifetimes being independent). Lifetimes follow a decreasing quasi-renewal process (Wang and Pham, 1996, 1997) with parameter a (0<a<1). The preventive and corrective maintenance actions have respectively constant (deterministic) durations Tp and Tc, during which the system is unavailable. We develop, in what follows, the expression of the system stationary availability under these conditions. This expression will allow finding the optimal period T* for undertaking preventive maintenance so as to maximise the system stationary availability. We will first define the quasi-renewal process in the context of our study.\n2.1 The quasi-renewal process with non-negligible repair duration\n3. Computational procedure of the quasi-renewal function: We propose a numerical algorithm which is a generalisation of the algorithm developed in Chelbi and Ait-Kadi (2000), designed to compute the n-fold convolutions of a given distribution function to evaluate the probabilities of shock occurrence within the classical renewal theory framework. 
The new algorithm computes the convolution products used in equation (2) to express the quasi-renewal function: Equation 16 Recall that T1 is the random variable associated with the time to first failure of the system, distributed according to any probability distribution FT1(.). The expression of the quasi-renewal function, proposed in equation (3), can be developed as follows: Equation 17 where * represents the convolution product operator. The proposed algorithm recursively generates continuous numerical approximations to each member of the family of functions FSSn(.) derived from a distribution function FT1(.) and its associated density function fT1(.) according to the following recursive scheme: Equation 18 with fTn(t) = (1/a^(n-1)) fT1(t/a^(n-1)) for n>=1 (Wang and Pham, 1996, 1997). In order to evaluate FSSn(t), we use the common numerical representation of a function as a set of m+1 tabulated values of the independent variable at known equally spaced intervals of length h, with a cubic spline interpolation to supply intermediate values. The choice of a cubic spline interpolation is motivated by the fact that it preserves positivity and monotonicity (Cleroux and McConalogue, 1976; Schoenberg and Shisha, 1967), which is essential when approximating distribution functions. The calculation of the tabulated values for FSS1(t) is based on the analytical identities: Equation 19 which can be expressed as: Equation 20 We use the Gauss-Legendre quadrature formula to evaluate these partial range integrals. Equation (12) is used to calculate FSSn+1(t), where the integrand is the product of fTn+1 and the cubic spline approximation to FSSn(t). Equation 21 In this formulation, FSSn(t) is defined at points jh and (jh+h). The intermediate values required for the Gauss-Legendre formula are supplied by the cubic spline interpolation. Hence, in order to evaluate the quasi-renewal function (equation (9)), one has to evaluate FSSn(t) for n=1, 2, 3, ... Equation 22 Equation 23 where F1(t-Tc) is given by: Equation 24 Hence: Equation 25 The same procedure used above to evaluate FSS2(t) for n=1 will be used to evaluate the successive terms FSSn+1(t) for n=2, ..., determining successively: F1(t-nTc), F2(t-nTc), ... , Fn(t-nTc). Equation 26 Equation 27 ... Equation 28 Thus: Equation 29\n4. Numerical examples: To illustrate the considered maintenance policy, let us consider randomly failing systems whose times to first failure follow a Gamma distribution with shape parameter a=3 and scale parameter b=3. The following durations are considered:* Duration of the corrective maintenance actions: Tc=0.35 time unit.* Duration of the preventive maintenance actions: Tp=0.15 time unit. The corrective maintenance actions are characterised by a constant efficiency factor a=0.8. For the considered systems and for different values p of the probability to have perfect preventive maintenance actions, Table I shows the optimal preventive maintenance periods T* which maximise the system availability SA(T). Figure 5 shows the evolution of the stationary availability SA(T). Notice that, as should be expected, as preventive maintenance actions get more efficient (increasing p), the optimal values of T* get larger (PM actions should be performed less frequently). Moreover, efficient PM actions allow more flexibility when implementing the optimal policy in practice. 
Indeed, as clearly shown by the availability curves, a given deviation from the optimal solution (in terms of the PM period T) yields significantly less availability loss as PM actions get more efficient. Considering the same input parameters, and the system with the Gamma time to failure distribution for illustration purposes, Figure 6 shows that for a given efficiency degree p of preventive maintenance, as the corrective maintenance actions become more and more efficient (increasing a), the optimal preventive maintenance period T* becomes larger and the corresponding system availability SA(T*) increases. It is also interesting to notice that beyond a certain level of the repair efficiency factor a, the optimal availability variations tend to stabilise. This allows us to conclude that it is possible to tolerate a certain degree of ineffectiveness of the corrective actions without a notable availability loss. This flexibility increases as preventive maintenance gets more efficient (increasing p). This figure clearly shows how preventive and corrective maintenance efficiency extents simultaneously affect the system optimal availability. For the same system and for p=0.7, Figure 7 displays optimal strategies in terms of the repair efficiency factor a for different ratios Tc/Tp. We notice that for any given repair efficiency degree a, as the ratio Tc/Tp increases, the optimal period of preventive maintenance T* and the corresponding optimal stationary availability decrease.\n5. Conclusion: In this work, we studied a preventive maintenance policy for randomly failing systems evolving in a context where preventive and corrective maintenance actions are imperfect. The imperfection of repair actions has been modelled by a decreasing quasi-renewal process based on a deterministic repair efficiency factor a. Preventive maintenance actions follow a (p, q) rule. We modelled the system stationary availability and studied its evolution under the proposed maintenance policy. It has been proven that for any given situation regarding the system, the repair and preventive maintenance efficiency extents, and the downtime durations for preventive and corrective maintenance, there is necessarily a finite optimal period T* of preventive maintenance which maximises the system stationary availability. A sufficient condition for the uniqueness of the optimal solution has been derived. This condition is that the quasi-renewal density (quasi-renewal rate) should be increasing with time. The expression of the system stationary availability uses the quasi-renewal function, which is impossible to express in a closed form. We proposed a generalised formulation of the quasi-renewal process considering non-negligible repair durations, and a numerical algorithm allowing the computation of the quasi-renewal function for any quasi-renewal process, for systems whose times to first failure follow any given distribution. The quasi-renewal function being a key quantity for a great number of imperfect maintenance models based on the quasi-renewal process, the developed computational procedure can be very useful for such models. The numerical results obtained, corresponding to a set of particular cases, illustrated how preventive and corrective maintenance efficiency extents simultaneously affect the system optimal availability. They allowed us to conclude that as preventive maintenance gets more efficient, it is possible to tolerate a certain degree of ineffectiveness of the corrective actions without a significant availability loss. 
Knowing the tolerance interval of the repair efficiency degree that does not cause significant availability loss represents an important source of flexibility for maintenance management. Several extensions of this work considering more complex imperfect maintenance policies are currently under consideration.\n", "role": "user" }, { "content": "Purpose: - The purpose of this paper is to study the evolution of a system stationary availability and determine the optimal preventive maintenance period, which maximises it, in a context where preventive and corrective maintenance actions are imperfect and have non-negligible durations.\nDesign/methodology/approach: - The quasi-renewal process approach and a (p, q) rule are respectively used to model corrective and preventive maintenance. Considering the durations of the preventive and corrective maintenance actions as well as their respective efficiency extents, a mathematical model and a numerical algorithm are developed in order to compute the system stationary availability.\nFindings: - It has been proven that for any given situation regarding the system, the repair and preventive maintenance efficiency extents, and the downtime durations for preventive and corrective maintenance, there is necessarily a finite optimal period T* of preventive maintenance which maximises the system stationary availability. A sufficient condition for the uniqueness of the optimal solution has also been derived. Numerical examples illustrated how preventive and corrective maintenance efficiency extents simultaneously affect the system optimal availability.\nPractical implications: - The study considers a general industrial framework where preventive and corrective maintenance actions are imperfect. In fact, neither the best-qualified technicians nor the most suitable tools or spare parts are always available to carry out maintenance actions. In such a context, for a large variety of technical systems, when implementing preventive maintenance policies one should take into account the efficiency extents of maintenance actions as well as their durations in order to evaluate and optimise the system availability. The paper provides maintenance managers with a decision model allowing not only the computation and optimisation of system availability, but also the investigation of how preventive and corrective maintenance efficiency extents simultaneously affect the system optimal availability. The proposed model also allows one to find to what extent corrective action ineffectiveness can be tolerated without a significant availability loss.\nOriginality/value: - The paper proposes a modified formulation of the quasi-renewal process taking into account the non-negligible duration of corrective maintenance actions and periodic preventive maintenance. A new numerical algorithm is also developed in this context to compute the quasi-renewal function, which is impossible to find in closed form. This allowed the computation and optimisation of the system stationary availability.\n", "role": "assistant" } ]
How start-ups successfully organize and manage open innovation with large companies
[ "Open innovation", "Entrepreneurship", "SMEs", "Start-ups", "CVC" ]
[ { "content": "Summarize the following paper into structured abstract.\n\n1. Introduction: Open innovation is becoming a widespread practice in companies and developed into a prominent research topic in the current innovation management literature (Chesbrough et al., 2006, 2014). Despite increased interest of academicians and practitioners, open innovation research has focused mainly on large companies while small and medium enterprises (SMEs) have received in comparison little attention (Chesbrough and Crowther, 2006; Van de Vrande et al., 2009; Lee et al., 2010; Ahn et al., 2015; Eftekhari and Bogers, 2015). The existing literature on large companies highlights that open innovation positively influences the firm's performance (Dodgson et al., 2006; Gassmann et al., 2010) while scant literature on SMEs suggest that small firms may even benefit to a greater extend from open innovation activities (Van de Vrande et al., 2009; Spithoven et al., 2013; Brunswicker and Vanhaverbeke, 2015).\n2. Start-ups and open innovation: In spite of the burgeoning literature on open innovation, the studies focusing on start-ups are still in an embryonic phase. By nature start-ups rely more on external partners (Ceci and Iubatti, 2012; Henton and Held, 2013) but there is still no explanation on which mechanisms start-ups are using to organize and manage collaborations successfully with external partners. The start-ups are different from SMEs as they are also bounded by the liability of newness. However, they are similar to SMEs they both face scarcity of resources usually labeled as liability of smallness. Start-ups also share many commonalities with SMEs in terms of strength and challenges but their complexities are exacerbated due to their liability of newness. We will also look at existing literature on open innovation in SMEs in order to differentiate between open innovation in SMEs and start-ups.\n3. Methodology: As discussed in the previous section, there is a lack of understanding how start-ups organize and manage open innovation with large companies. In this study we use multiple case studies to strengthen theory building and explain the collaboration between large and start-ups seen from the perspective of the start-up manger (Eisenhardt, 1989). The exploratory case study method is used to get in-depth insights into prevailing practices in these collaborations (Stake, 1995; Yin, 2003). A mix of both primary and secondary data were used to construct the cases. The cases were built on (Vanhaverbeke et al., 2012) extensive fieldwork to understand the nature of open innovation in start-ups. We identified two cases where start-ups were involved in open innovation activities with the large companies. The cases were selected to analyze start-ups' perspective in both inbound and outbound open innovation. Additional interviews were conducted with the startup managers and other prominent members of their innovation network. This allowed us to assess the open innovation activities on a longitudinal basis as well as to get updated information on the mechanisms and processes used. The longitudinal case studies helped us in describing the open innovation context in greater detail. Several secondary sources such as firms' websites, newsletters and other material were used for further detailed information.\n4. Case studies: In this section we illustrate via two extensive case studies how start-ups can effectively innovate in collaboration with a large company. 
First, we examine inbound open innovation: the case study of Isobionics illustrates how a start-up develops and commercializes abandoned technology developed by a large company (see arrow on top of Figure 1). Second, the case study of Philips' Airfryer illustrates how a start-up can be a valuable source of innovation for the large company and can gain significantly by entering into a collaborative innovation agreement with its large counterpart (see arrow at the bottom of Figure 1).\n5. Discussions and conclusion: We presented two cases of successful open innovation practices between a start-up and a large company. There is limited prior research available on such collaborations from the perspective of a start-up, and the paper contributes to the open innovation literature by providing an understanding of the inbound and outbound open innovation practices from the perspective of a start-up. Prior research points to the fact that start-ups can greatly benefit from open innovation as a way to overcome their liability of newness and smallness. In this study we explain the mechanisms through which start-ups can reap the benefits of open innovation practices when they collaborate with large firms. We illustrate how they can take on challenges and can leverage skills and assets of the large firm to overcome these challenges. Our work is also different from prior research as the two in-depth case studies provide a detailed understanding of key challenges, barriers, advantages and processes of both forms of open innovation. We find, for instance, that the acquaintance of the start-up entrepreneur with management in large firms is important to understanding the success of collaboration with large firms. Next, collaboration between start-ups and large firms needs to be considered as a dynamic relationship, since the relationship itself is part of a broader picture in which partners' needs, strategic positions and further network formation change over time. When start-ups are technology recipients, success is not only determined by the relationship with the larger firm but also by a broader strategy based on open innovation. Start-ups that are technology providers should be skillful in negotiating a deal with their counterparts, as there is a good chance that the latter will no longer need the former's technology once they master the technology internally.\n", "role": "user" }, { "content": "Purpose: Open innovation in start-ups is a relatively unexplored field, and studies focusing on collaborative innovation between start-ups and large companies seen from the former's point of view are virtually nonexistent. The authors address this gap in an exploratory study built on in-depth case studies. The purpose of this paper is to illustrate how start-ups successfully organize and manage open innovation with large companies. The paper highlights common challenges and barriers faced by start-ups in adopting open innovation practices, along with its benefits for them.\nDesign/methodology/approach: This is an exploratory study based on two case studies. The cases are diligently selected to examine two key forms of open innovation - inbound and outbound open innovation - in start-ups.\nFindings: The paper provides insight into how start-ups organize and manage open innovation activities with large companies and how this benefits them in overcoming the liabilities of newness and smallness. The practices significantly differ from those followed in large companies. 
The paper highlights the advantages and challenges of inbound and outbound open innovation for start-ups. This paper also ascertains the crucial role of the start-up manager in the successful implementation of open innovation and shows how start-up managers with prior experience of working in/with a large company can proficiently deal with the larger counterpart in the innovation network.\nResearch limitations/implications: This research is based on exploratory case studies, so the conclusions drawn from these two cases may be hard to generalize. The findings of the study could be used for further development of the theoretical framework. Future research, including quantitative studies, will be helpful in examining the conclusions and providing a more in-depth understanding of open innovation in start-ups.\nPractical implications: The paper includes several practical implications for managers, including the role start-up managers play in organizing and managing open innovation activities. Furthermore, this paper suggests how start-ups could orchestrate an open innovation ecosystem.\nOriginality/value: The paper is a step forward in filling the literature gap about open innovation in start-ups, with some definite implications for start-up managers. A lot is written about the collaboration between large firms and start-ups from the former's point of view, but the start-up's perspective has been left unexplored.\n", "role": "assistant" } ]
Incremental and radical open service innovation
[ "Service innovation", "Open innovation", "Radical innovation", "Incremental innovation", "New service development", "Case study method" ]
[ { "content": "Summarize the following paper into structured abstract.\n\nIntroduction: New service development (NSD) is a key process in service firms (Edvardsson et al., 2013; Johne and Storey, 1998). It is often based on co-creating new knowledge with customers through involving them in the NSD process (Magnusson et al., 2003; Melton and Hartline, 2015). Especially in the business-to-business (B2B) market, research suggests involving other actors such as suppliers, partners and even competitors (Syson and Perks, 2004; Lusch and Nambisan, 2015). Beyond existing models based on high-profile cases such as Google and IBM, there is scarce knowledge on how to organize open service innovation. Mina et al. (2014) argue that open innovation for services is different from products, but research provides no knowledge on taking this difference into account to organize open service innovation, which is especially relevant for industrial firms in a B2B market using open innovation for both products and services (Visnjic et al., 2016).\nTheoretical background: Service innovation\nMethod: A nested case study\nFindings: The innovator firm organizes the open service innovation groups, turning their ideas and concepts into commercial services. Each of the open service innovation groups is organized as a team of participants facilitated by a project manager from the innovator firm.\nDiscussion and implications: In the following section, we elaborate on the similarities and differences between the identified archetypes for open service innovation. In particular, we discuss how open service innovation can be used for both incremental and radical service innovation. We end by summarizing the theoretical and managerial implications, as well as limitations and suggestions for further research.\n", "role": "user" }, { "content": "Purpose: Open service innovation is an emergent new service development practice, where knowledge on how to organize development work is scarce. The purpose of the present research is to identify and describe relevant archetypes of open service innovation. The study views an archetype as an organizing template that includes the competence of participants, organizing co-creation among participants and ties between participants. In particular, the study's interest lies in how open service innovation archetypes are used for incremental and radical service innovation.\nDesign/methodology/approach: For the research, a nested case study was performed, in which an industrial firm with nine open service innovation groups was identified. Forty-five interviews were conducted with participants. For each case, first a within-case analysis was performed, and how to perform open service innovation in practice was described. Then, a cross-case analysis identifying similarities and differences between the open service innovation groups was performed. On the basis of the cross-case analysis, three archetypes for open service innovation were identified.\nFindings: The nested case study identified three archetypes for open service innovation: internal group development, satellite team development and rocket team development. This study shows that different archetypes are used for incremental and radical service innovation and that a firm can have multiple open service innovation groups using different archetypes.\nPractical implications: This study provides suggestions on how firms can organize for open service innovation. 
The identified archetypes can guide managers to set up, develop or be part of open service innovation groups.\nOriginality/value: This paper uses open service innovation as a mid-range theory to extend existing research on new service development in networks or service ecosystems. In particular, it shows how open service innovation can be organized to develop both incremental and radical service innovations.\n", "role": "assistant" } ]
Information science: an interdisciplinary effort?
[ "Information science", "Integration" ]
[ { "content": "Summarize the following paper into structured abstract.\n\n1. Introduction: Information science has incorporated a number of distinct disciplines into its mainstream and peripheral research, including library science, information technology, sociology, communications, computer science, AI. Collaboration is being promoted within many disciplines as a source of innovation and theory progression, including information science (Borgman, 2004; Hara et al., 2003). Given that information science has multiple disciplines at work within it, this paper seeks to investigate the nature and value of collaboration. A major component of this investigation is clarifying semantic meanings for terms that are frequently used interchangeably by researchers and practitioners with little or no qualification or clarification: intradiciplinary, crossdisciplinary, multidisciplinary, transdisciplinary and interdisciplinary (Stember, 1998, p. 341). I will focus on the terms \"interdisciplinary\" and \"multidisciplinary\" because they are the most prominent and frequently used, and in following sections I will argue that the other terms fall roughly into categories defined by these two. The investigation will be guided by two main questions:1. What philosophic distinction can be made between inter- and multi-disciplinary efforts2. Is one more valuable than the other to information science research?Why are these questions fruitful or have benefit to information science? Some would argue that imprecise meanings surrounding terms are not detrimental to their use and effectiveness in academic arguments and professional practice (Power, 2003, p. xii). In contrast to this view, I want to suggest that a fuller understanding of these two terms provides three main benefits to information science. First, well-defined terms provide a basis for collective understandings and consistent term use. I am not suggesting that the terms can only have a single meaning that each employer of the term must adhere to. Rather, a well-developed understanding of a term allows for the possibility of more consistent use, but also for challenges and criticisms to be built toward the accepted meaning of the term. Second, a precise understanding of a term allows for \"deeper research\" by employing terms at steps down from the broadest level of meaning. Research at a deeper level can be achieved through an ability to extrapolate from one research project to new research through a precise understanding of the conclusions presented. For example, if a research paper on multidisciplinarity is presented other researchers interested in the topic are able to take account of the findings and incorporate or extend the research much more effectively on the basis of shared meaning. There is less possibility of meaning being lost in translation between projects. Third, it is widely argued that precise meanings of words, or at least frameworks of meanings, allow for much more effective communication (Porksen, 1995). Thus, communication between academics, students and practitioners in information science can be better achieved.The motivation inherent in each of these benefits is that the larger issue is much more than just a semantic confusion. How terms are used has practical implications for information science through the implementation of theory into real world library and information contexts. 
The arguments in this paper are motivated by the key issue of information science researchers engaging with discussion of confused and ineffectual terminology. This key issue is not simply semantic because the emphasis is on gaining a deeper understanding of the nature of interdisciplinary and multidisciplinary research within information science, and in turn the influence this has on information science practice. The following sections will develop the meanings of interdisciplinary and multidisciplinary research through both definitions and examples. It is useful and important to provide a most basic account of each so that the discussion in section 1.2 can take place with some boundaries around the terms, as well as provide some basis for the full-scale development of the terms in section 2.1.1. Definitional bases\n2. Conceptions of interdisciplinarity and multidisciplinarity: Anthropologist Clifford Geertz suggests that we are living in a world of \"blurred genres\" (Geertz, 1983, p. 20), and his sentiments are echoed in information science (Dogan, 1996, p. 297; Rogers, 2004, p. 88). Geertz thinks the blurring of genres is highly relevant to interdisciplinary study, but before investigating such blurring, I first want to deal with an often unspoken and assumed component of collaboration between disciplines: a discipline. I want to begin an investigation of i/d and m/d research by way of disciplinary investigation. The concept of multiple disciplines collaborating in some way assumes an understanding of what disciplines are. While a full account of the development of disciplines is impossible due to space constraints, a basic account can be offered which avoids blanket assumptions and proves fruitful for understanding i/d and m/d. 2.1. Assumptions of disciplinarity\n3. Case study: information science and cognitive science: Possible collaborative efforts between IS and a variety of disciplines would prove useful to explore. For example, philosophy can provide numerous relevant arguments concerned with the nature of knowledge that could deeply integrate into IS. Likewise, sociology presents theories of considerable interest to IS research that delves into group dynamics and cultural trends for information use. Cognitive science is another discipline that has much to offer based on its strong collaborative nature. IS has involved cognitive science in research and theories in varying degrees for a significant period of its history. Birger Hjorland provides an account of the history of cognitive science in IS and finds strong evidence of collaboration at least from 1977 onwards (Hjorland, 2002, p. 257). While it can be argued that Hjorland has provided the most sustained research into the value of cognitive science for IS, many other researchers within IS have also identified or undertaken valuable collaboration between IS and cognitive science (Allen, 1991; Belkin, 1990; de May, 1977; Newby, 2001; Rogers, 2004). It is important not to mistake these researchers for actual cognitive scientists; rather, their significance grows from their intentional acknowledgement of cognitive science as a contributing factor to IS. While some of them delve into cognitive theory, it is only as related to IS theory and practice. Cognitive science is an interesting discipline to discuss in the context of i/d and m/d considerations in IS because cognitive science is deliberately collaborative. 
Cognitive science explicitly incorporates a variety of disciplines because it intentionally approaches the topic of cognition as boundary spanning and collaborative (Lepore and Pylyshyn, 1999, p. viii; Von Eckardt, 2001, p. 454). The following is a description of cognitive science from two elder statesmen in the cognitive science faculty of Rutgers: The pursuit [understanding cognition] is essentially multidisciplinary and involves techniques and knowledge drawn from experimental psychology, computer science, neuroscience, philosophy, linguistics, mathematics and engineering. The multidisciplinary nature of the research goes much deeper in cognitive science than in many other interdisciplinary fields (Lepore and Pylyshyn, 1999, p. vii). Lepore and Pylyshyn identify the intentional diversity of disciplines in cognitive science, while also demonstrating the inadvertent interchange of \"multidisciplinary\" and \"interdisciplinary\" by employing them in the same sentence. Leaving aside the implicit question asked by Lepore and Pylyshyn of what type of collaboration is predominant in cognitive science (see Rogers et al., 2003, pp. 16-18), it proves a very beneficial case study for analyzing the existing and potential collaboration between IS and cognitive science. 3.1. Collaboration between cognitive science and IS\n4. Conclusion: The initial questions asked by this paper were: 1. What philosophic distinction can be made between inter- and multi-disciplinary efforts? 2. Is one more valuable than the other to information science research? Philosophically, there are significantly distinct concepts residing in each term, which I have argued boil down to integration and proximity. These distinct features allow for the terms to signify very specific forms of collaboration within IS. While it is naive and confused to envisage universal acceptance of the usage and meaning of terms, more modest ambitions have valuable benefits for IS. Utilizing these terms consistently introduces integrity within research that allows conclusions to be extrapolated and research programs to be more effectively identified and understood. While universal understandings of i/d and m/d are not intended from the arguments here, the basic challenge to discuss and make explicit meanings of these terms when used is put to IS research. The second question can attract a number of answers. To summarize the answer suggested here, I return to a passage from Yvonne Rogers: There is a widespread view that interdisciplinary research is a good thing...However, it is also widely believed that \"true\" interdisciplinarity is very difficult to achieve and, more often than not, remains an elusive goal. In practice, many self-styled interdisciplinary enterprises actually work at the level of being multidisciplinary: where a group of researchers from different disciplines collaborate by working together on the same problem towards a common goal, but continue to do so using theories, tools, and methods from their own discipline. (Rogers et al., 2003, p. 3). Integrating disciplinary theory and method is mutually challenging and difficult for the disciplines involved. By default, as Rogers et al. (2003) argue, most collaboration ends up as m/d. This paper identifies m/d efforts as extremely beneficial in some contexts; therefore, it has an important role to play. By the same token, i/d efforts are most appropriate and beneficial in other contexts. 
I have argued that the integration of other disciplinary theory and methods provides challenges and opportunities that both tie IS to developing and promising areas of research in other disciplines and help propel IS forward methodologically. In this way, i/d efforts, though more challenging[5], promise longer-term and further-reaching benefits. The pursuit of collaboration between IS and various disciplines, including cognitive science, is valuable and will continue to attract attention because of the advantages it offers. The direction such collaboration takes IS will be partially determined by the influence other disciplines have on IS[6]. It is the argument in this paper that, to be of most benefit to IS research, types of collaboration should be made explicit, and the most applicable form utilized. The motivation for this argument is not simply semantic, though; the conclusion is directed at researchers who engage with deeper meanings of these terms and develop much fuller understandings of what i/d and m/d allow and create for IS research. Through deeper understanding in research, the ultimate implication is application in real-world IS environments[7].\n", "role": "user" }, { "content": "Purpose: - The purpose of this paper is to investigate the nature of collaboration within information science, to argue for distinct meanings for interdisciplinary and multidisciplinary research and to answer two questions: What philosophic distinction can be made between inter- and multi-disciplinary efforts? Is one more valuable than the other to information science research?\nDesign/methodology/approach: - The semantic inconsistency in describing collaboration is identified in information science. Philosophic analysis of the terms interdisciplinary and multidisciplinary is conducted, arguing for distinct meanings. A case study through cognitive science exploring the philosophic meanings is developed.\nFindings: - Interdisciplinarity and multidisciplinarity are very distinct in terms of methods and frameworks. Interdisciplinary research has longer-term benefits for information science, but multidisciplinary research can also fulfil valid collaborative roles in information science.\nResearch limitations/implications: - A brief technical discussion of cognitive science theory lends itself to those interested in cognitive science. The paper is philosophic in nature.\nPractical implications: - The inconsistencies surrounding the terms interdisciplinary and multidisciplinary have an impact on professional practice. The argument for more consistent terminology will introduce more semantic integrity and reduce miscommunication. There is a challenge to engage in more interdisciplinary research and practice within information science.\nOriginality/value: - Collaboration is a commonly used term, but little research into what it precisely means within information science has been conducted. The paper argues for a clear distinction between various types of collaboration, and provides a case study with cognitive science.\n", "role": "assistant" } ]
How does negative experience sharing influence happiness in online brand community? A dual-path model
[ "Happiness", "Sharing", "Online brand community", "Social support", "Social exclusion", "Negative review" ]
[ { "content": "Summarize the following paper into structured abstract.\n\n1. Introduction: Brand community is a collection of social relationships (Muniz and O'Guinn, 2001). Recently brand community researchers explain the associations among consumers, community and brand with a handful of sociology theories such as social capital, social network and sharing (Habibi et al., 2014; Zaglia, 2013; Zhang et al., 2016). A major form of online brand community (OBC) activities is sharing. Brand community provides a huge platform for consumers to share product experience, expanding the scope of word-of-mouth (WOM) communication and making a brand well-known (Jang et al., 2008; VanMeter et al., 2018). Previous research has found that sharing plays an important role in OBC in a way that sharing positive brand experience strengthens the consumer-brand association, while sharing negative brand experience weakens it (Trusov et al., 2009).\n2. Background: 2.1 Sharing in online brand communities\n3. Conceptual framework and hypotheses: As indicated by the contradictory findings in literature, negative sharing may produce both negative (e.g. unfavorable attitude) and positive (e.g. increased publicity) outcomes. The types of reviews and reactions can therefore be sorted in different corners of a 2-by-2 matrix (see Table II). Increased publicity increases the chance of eliciting negative or positive reactions among viewers. For example, online community members after viewing negative reviews may choose to either support or exclude the sharers of these reviews (Ilhan et al., 2018). While most of negative-review studies focus on negative outcomes such as decreased sales and product rating, few have investigated positive and negative reactions with a holistic approach.\n4. Study 1: An online survey was conducted targeting OBC users through Sojump.com (now www.wjx.cn), a popular online survey platform which maintains connections with many OBCs in China. A total of 1,035 questionnaires were collected. In total, 20 invalid responses were eliminated. This resulted in a final sample of 1,015 responses (response rate 98.07 percent). Valid respondents are members of various OBCs from different industries such as Huawei's club.huawei.com, MI's bbs.xiaomi.cn and Lenovo's club.lenovo.com.cn.\n5. Study 2: The first aim of Study 2 was to examine whether the hypothesized relationship between NBES and online social support/exclusion would be moderated by membership duration - to re-examine H5. The second aim of this study was to extend the findings from a viewer perspective by investigating how the negative review from the sharer was perceived by other members. A field experiment was conducted in cooperation with the online survey site (www.wjx.cn/). Experiment participants were randomly selected from MI's official OBC (http://bbs.xiaomi.cn/).\n6. Discussion and conclusion: The internet is filled with various positive and negative brand experiences, creating positive or negative WOM. Empirical studies have found that NBES weakens the relationship between the sharer and the brand. Does this hold for online brand communities? The findings of this study indicate that NBES in online brand communities does not necessarily result in unhappiness perceived by the sharer. The resulting outcome is determined by the game of two opposite paths: NBES leads to both social support and exclusion, which may enhance or weaken OBC happiness, respectively.\n7. 
Limitations and future directions: In our study, the concepts and measurements are taken directly from the literature. A detailed discussion of external validity in different contexts is lacking. We adopt a straightforward, typical measure of OBC happiness. Future research may examine the impact of social exclusion and support on more dimensions of happiness such as general life satisfaction and positive vs negative affect.\n", "role": "user" }, { "content": "Purpose: Consumers share negative brand experience on many occasions to vent their emotions and seek support. The purpose of this paper is to investigate the impact of negative sharing on members' happiness in online brand communities by drawing from two opposing constructs: social support and social exclusion.\nDesign/methodology/approach: Both survey and experiment methods were employed to test the conceptual model. Online survey data were collected from 1,015 mobile internet users.\nFindings: The findings reveal that negative sharing may enhance a sharer's happiness through online social support, particularly for novice community members. The findings also indicate greater online social exclusion for experienced members than for novice members. These findings cast doubt on the widely held assumption that increased engagement in a community will always produce positive outcomes. The moderating effect of membership duration is confirmed with an experiment involving MI's brand community members.\nResearch limitations/implications: The study of happiness in online brand communities sheds new light on consumer-brand and user-community relationships.\nOriginality/value: While most previous studies on negative sharing only explored the negative side of consequences, the authors contribute to this line of research by introducing both positive (social support) and negative (social exclusion) outcomes of negative reviews. The model also explains the conditions under which negative reviews enhance social support and social exclusion.\n", "role": "assistant" } ]
Museum object as document: Using Buckland's information concepts to understand museum experiences
[ "Michael Buckland", "Museum objects", "Document", "Semiotics", "Reader response", "Museums", "Document handling" ]
[ { "content": "Summarize the following paper into structured abstract.\n\nIntroduction: \"Information\" as a theoretical object is in an unenviable position. It must somehow embrace information as a material object, as an individual cognitive effect, and as a social institution (Raber and Budd, 2003, p. 521).Generally speaking, museum objects are in the midst of an identity crisis. Every field or discipline that speaks about objects supplies a different explanation of their being/purpose/definition/meaning (Wood and Latham, 2009). For example, archaeology and history both refer to \"material culture studies\" as important subject matter of study, yet neither appears to recognize each other's work. This fractured way of understanding objects is not helpful to those interested in focusing on the object as a unit of study. In a sense, the physical \"object\" shares a similar ambiguity and diversity of definitions in museum studies as the term \"information\" does in library and information science (LIS). Common treatments of the term \"information\" in different fields of study often appear to have no association with each other or are so dissimilar to render comparison useless (Huang, 2006; Capurro and Hjorland, 2002; Machlup and Mansfield, 1983). This is the case with the physical object as well.In this article, a bridge is built between a semiotic understanding of Buckland's information typology to museums and their objects and finally to the concept of user transaction with documents in museums. These connections are made as a way of explaining passive lived experience as a form of information acquisition, particularly the lived experience with physical things. The first section of the article reviews Buckland's information concepts in order to set up the connection to museum objects. Because his terminology is extensive, the review here is limited to those concepts valuable in understanding the museum context. Following this, museum object as \"document\" using Buckland's typology is explained as a Peircean semiotic, which provides a potential link between Buckland's information typology and museum visitor experience with objects. Finally, Louise Rosenblatt's transactive model of reader response is introduced to make the ultimate connection between Buckland's typology, museum object as document, and museum user lived experience. This work is directed at two audiences:1. information researchers who may be familiar with Buckland's work but are not familiar with the role of museum objects in information science; and2. museum studies researchers as an introduction to a new conceptual lens for understanding museum objects and museum visitor experiences.\nRevisiting \"information-as-thing\": Michael Buckland is well known for his somewhat controversial 1991 article entitled \"Information as thing\" (Buckland, 1991b), which is actually a chapter from his book of the same year, Information and Information Systems (Buckland, 1991a). Information-as-thing is regarded as a special interest by Buckland within the context of information systems, retrieval and storage, since he believes we can only directly deal with information in this sense. In this section, I will go through the main concepts surrounding his explanation of information, holistically and specifically. Table I provides a summary of many terms defined by Buckland in the 1991 article. 
Definitions are directly drawn from Buckland's descriptions.Before proceeding further, it is necessary to point out that Buckland's concept of information-as-thing has often been misunderstood over the years since it was published, with many readers interpreting the \"thing\" portion too literally or out of context. Over the years, Buckland's ideas became clearer with his work on the notion of \"document\" which has aided in a better understanding of \"information-as-thing.\" His intent was not to claim that all information is \"thing\" but to point out that much of what information science deals with (especially in information retrieval and storage) is indeed information-as-thing - or representations of knowledge. Buckland's concern was to \"develop and present an overall conceptual framework for considering information systems\" (Buckland, 1991a, p. 31). He felt that inclusion of the proper diversity of systems was more important than to draw boundaries between them (Buckland, 1991a).At its simplest, Buckland explains that information is used to denote things regarded as informative - he asks, \"What things are informative?\" in order to find out what makes up \"information\". This encompasses much of the physical world and the breadth of this definition makes some authors uncomfortable (e.g. Yerkey, 1991). But, as Buckland (1991d) says, we cannot force something to be what it is not. This view of information is admittedly broad, so Buckland goes on to express what he calls the \"topography\" for information science. He presents three meanings, or uses, of the term information:1. information-as-process;2. information-as-knowledge; and3. information-as-thing.There is no one correct meaning, and no clear boundaries define each of these groupings. They are simply ways in which he sees \"information\" perceived in information science. Below is a summary of Buckland's definitions of all three terms:1. information-as-process - the act of informing;2. information-as-knowledge - that which is perceived from the process; intangible; personal, subjective, conceptual; and3. information-as-thing - objects that can be informative; tangible; something expressed, described or represented in some physical way (as a signal).In fact, he says all three are part of the picture. His scenario is very cyclical and dynamic. In Figure 1 I have attempted to give a visual interpretation to Buckland's description of these concepts. The types of information are in no particular order as it is in reality a constant feedback loop. In addition, the location of the represented images (the \"things\": film, artifact, book) does not play any role in the process; their placement in the graphic is for convenience purposes only. Starting with the \"thing\", we will assume it is already out there in the world and not be concerned yet from where it came. The person who perceives the \"thing\" goes through a process (a thought, comparison, fitting) from the \"thing\" and what it represented in the mind[1] of the individual who processes it as a unique person (with his/her own construction of meaning). Then he/she (may) retain the information in his/her memory as knowledge. 
Once this person has gone through this cycle, s/he may begin it over again by expressing his/her knowledge, creating a \"thing\" - a representation - which can then later be used by another individual who will go through the same process. This \"thing\" is only a representation, not in itself knowledge, which Buckland indicates can only be in the mind of an individual. In this sense, systems cannot be knowledge, only representations of knowledge. Therefore, representations of knowledge and events must be information-as-thing, whether in a library, a museum or with computer-based systems. Buckland's examples of organizations that use information-as-thing (Buckland, 1991b) are: * libraries deal with books; * computer-based information systems handle data in the form of physical bits and bytes; and * museums deal directly with objects. As Buckland (1991b) describes: The intention may be that users will become informed (information-as-process) and that there will be an imparting of knowledge (information-as-knowledge). But the means provided, what is handled and operated upon, what is stored and retrieved, is physical information (information-as-thing) (p. 352). Buckland's focus on information-as-thing is simply an emphasis on his part and not a statement of value over information-as-process or information-as-knowledge. Information-as-thing includes data, text, documents, objects, and events (Buckland, 1991b). It includes, but extends beyond, communication. To Buckland, whatever information storage and retrieval systems store and retrieve is necessarily \"information-as-thing\", as these systems can only deal directly with \"information\" in this sense (Buckland, 1991a). He later refers to these \"things\" as \"documents\", or any organized physical evidence (Buckland, 2000). For the sake of argument, Buckland then simplifies the world into four aspects of information using Table II. In a confusing turn, Buckland introduces what he calls a \"fourth element\" into the whole scenario - information processing, which refers to the manipulation, handling, and derivation of new forms (or versions) of information-as-thing. The separating out of this element is somewhat artificial since it is a form of information-as-process, but Buckland chooses to delineate mental processing from manual processing, where information processing is a \"manual\" undertaking. Although this is a somewhat false dichotomy, one can understand his reasoning when looking at Table II, which provides the \"two distinctions\" of information and its systems: 1. between entities and processes; and 2. between tangibles and intangibles (Buckland, 1991b). Perhaps a better representation of these concepts would have been as a graphic, revealing the cyclical, active nature of how information behaves and manifests itself. In his description, Buckland describes his concepts in a very dynamic way, but then represents them in the end as the opposite of what he was trying to explain. In later works, Buckland came to understand that because the term \"information\" has multiple meanings, it is not useful as the central concept for LIS. A more appropriate term is \"document\" (information-as-thing; object-as-sign). This concept will be discussed below in greater detail.\nBuckland's information concepts in detail: In his larger work, Information and Information Systems (Buckland, 1991a), published around the same time as the information-as-thing article, Buckland explicates some of his conceptual ideas about information in more detail. 
This, along with more developed later works, gives us a deeper understanding of his ideas on information and information systems. This section will detail a number of the complex ideas Buckland has raised in explicating his typology so that we may better understand their application in the museum context. The terms discussed below are:* evidence;* species of information;* types and tokens;* systems, situation and consensus;* processes involved in provision and use of information; and* documents/documentation.Evidence\nBuckland's information concepts and museum objects: The Forms that documentary work assumes are as numerous as the needs from which they are born (Briet, 2006).Museums as information systems; museum objects as documents\nBuckland's information concepts and museum object experience: The link must now be made between the museum object as a document and museum user experience. In order to do this, the scenario is explained in the context of semiotics to understand the relationship between person, museum object and meaning in a holistic, situational perspective. Even Buckland points out that his concepts lend themselves to the semiotic (Buckland and Day, 1997), although he has never fully explicated the connection. In order to make this case, I will first explain Buckland's concepts in relation to the semiotic triangle and the sign, and relate semiotics to LIS as a whole. Following this, I will illustrate the museum object as a sign (a document) by using museum theorist Edwina Taborsky's rendition of the semiotic triangle to place the museum squarely in the picture, and finally fit that into my semiotic conception of Buckland's information components and processes. Once this thread is drawn, it will be connected to a model borrowed from reader response theory (also a semiotic system) which will help merge Buckland's information concepts with user experience with museum objects.The semiotics of Buckland's information typology\nConclusion: The original intent of this article was to link three areas of study:1. Buckland's information typology;2. museum object study; and3. an understanding of museum visitor experience.Using Buckland's conceptual framework, especially that of the \"document\" and an emphasis on the semiotic character of other elements of his typology, I have attempted to connect these areas into a complex, but meaningful perspective. Discussion in this paper starts with Buckland's concepts, then makes links to the semiotics of museum objects, and finally to museum user experience. Ultimately, these connections are about the coming together of document, person and meaning - and an aspect of information behavior in museums. This perspective is equally useful to both museum studies and information science.Buckland's typology is important to an understanding of the museum system and museum object in both LIS and museum studies. The concept of \"document\" opens up a broad perspective, which creates rather than limits understandings of the human relationship with information. This broad concept of \"document\" as sign/semiotic helps us understand user experience in ways not previously explored in the convergence of museums and information studies. 
In addition, using these concepts can help museum investigators \"work backwards\" from visitor experience to understand the museum professional's role in the entire process of informing the public - from selection, description, and creation of a set of signs that have the potential to inform visitors in a broad continuum of ways. Buckland helps us see the museum as an information system, the museum object as a document, and the multidimensional use of the concept information and its semiotic ramifications. Information, we know, is more than just text; it is a roadmap from external thing to internal experience. And in this sense, museum objects as documents can be understood as actors and players in the transaction between people and things. In the broadest sense, Buckland's concept of document as a unifying theoretical concept for museums, libraries, and archives should not be taken lightly. These institutions share one major thing in common: they are functioning cultural institutions charged with the storage, care, and accessibility of the information users seek out and experience. Rather than each working independently, creating divergent theories, methods, standards, best practices, and concepts, we would be better off understanding our similarities - the document - and communicating and learning from each other.\n", "role": "user" }, { "content": "Purpose: - The purpose of this article is to understand the meaning of museum objects from an information perspective. Links are made from Buckland's conceptual information framework as a semiotic to museum object as \"document\" and finally to user experience of these museum \"documents\". The aim is to provide a new lens through which museum studies researchers can understand museum objects and for LIS researchers to accept museum objects as another form of document to be studied.\nDesign/methodology/approach: - A conceptual and comparative analysis of Buckland's information typology as a semiotic. The outcome of the analysis forms a model of understanding the museum object as a \"document\" that is accessed by users on a continuum of experience.\nFindings: - Michael Buckland's information typology is insightful and useful for a broad understanding of what all heritage institutions have in common: the physical object. Buckland helps us see the museum as an information system, the museum object as a document, and the multidimensional use of the concept information and its semiotic ramifications.\nOriginality/value: - Buckland's typology is important to an understanding of the museum system and museum object in both LIS and museum studies. The concept of \"document\" opens up a broader perspective, which creates, rather than limits, understandings of the human relationship with information. This expanded concept of \"document\" as sign/semiotic helps us understand user experience in ways not previously explored in the convergence of museums and information studies, from the practical to the theoretical. In this inclusive sense, Buckland's concept of document is a unifying theoretical concept for museums, libraries, and archives.\n", "role": "assistant" } ]
A survey of assessor beliefs and practices related to faking
[ "Personality", "Psychological tests", "Assessment", "Individual behaviour" ]
[ { "content": "Summarize the following paper into structured abstract.\n\n__NO_TITLE__: Individual assessment taps an individual's job-relevant knowledge, skills, abilities, and personality characteristics for the purpose of making a recommendation, or inference, about the individual's suitability for a job (Highhouse, 2002; Kwaske, 2004; Prien et al., 2003). Individual assessments also can be conducted for developmental purposes (Frisch, 1998); career planning, classification, and compensation (Klimoski and Zukin, 2003); and outplacement (Guion, 1998). Two main factors distinguish individual assessment from other selection procedures:1. these procedures are usually individually administered unlike assessment centers or other group testing; and2. clinical integration and synthesis of the individually administered battery of tools is used to arrive at an overall recommendation about a candidate (Kwaske, 2004).Many consulting firms continue to offer individual assessment services. However, the research base undergirding this practice is sorely lacking compared to other practice areas, such as traditional areas of personnel selection and training (Kwaske, 2004). Research on individual assessment has focused either on criterion-related validity (Albrecht et al., 1964; Dicken and Black, 1965; Gordon, 1967; Hilton et al., 1955; Meyer, 1956; Miner, 1970) or individual assessment practices (Ryan, 1992; Ryan and Sackett, 1987, 1989, 1992). The purpose of the current study is to extend past research on individual assessment validity and practices to a study of faking issues on personality inventories associated with individual assessments. We will first provide a brief discussion of the faking literature as it pertains to this study. We will then present the research questions that we are trying to address and the results of the current study.\nFaking on personality inventories: A number of labels have been applied to the phenomenon of individuals applying answers to personality questionnaires that are not commensurate with their actual answers including: impression management (IM), socially desirable responding, response style, response distortion, intentional distortion, and faking (Edens, 2005; Edens et al., 2001; Hakstian and Ng, 2005; Marshall et al., 2005; McFarland and Ryan, 2000). These distinctions in nomenclature have not been substantive - they all basically refer to the same substantive phenomenon. For the purposes of this paper, when we refer to the tendency of individuals to apply an answer to their personality questionnaire that is not commensurate with their actual answer in a positive direction, we will call this faking.The literature on faking on individual difference measures has a long, contradictory, and inconclusive history. What we do know about faking are two main things:1. people can fake on noncognitive measures when instructed to do so (Viswesvaran and Ones, 1999); and2. people vary in their ability to fake on noncognitive measures (McFarland and Ryan, 2000; Mersman and Shultz, 1998).However, the list of what we do not know is much longer.Some researchers have found significant mean differences between applicant and incumbents on noncognitive measures[1] (Bass, 1957; Kirchner et al., 1960; Michaelis and Eysenck, 1971; and Rosse et al., 1998); whereas, other researchers have found no evidence of such differences (Hough et al., 1990; McClelland and Rhodes, 1969; Orpen, 1971; and Schwab and Packard, 1973). 
Some researchers have found differences in factor structure (Schmit and Ryan, 1993) or item functioning (Stark et al., 2001) between applicant and nonapplicant populations, whereas other researchers have found no evidence of such differences (Ellingson et al., 2001; Smith and Ellingson, 2002). Some researchers have found social desirability corrections to have no moderating impact on criterion-related validities of noncognitive measures (Ones et al., 1996), whereas other researchers suggest that social desirability scales may be working but we may not be able to assess their efficacy using correlational techniques (Christiansen et al., 1994; Rosse et al., 1998). Finally, efforts have been devoted to either reducing faking or its effects, such as: * developing forced-choice techniques (Christiansen et al., 2005; Jackson et al., 2000); * the use of empirically keyed instruments with subtle items (Holden and Jackson, 1981; Worthington and Schlottmann, 1986); * warnings to applicants not to fake (Dwight and Donovan, 2003; McFarland, 2003); * appropriateness measurement (Zickar and Drasgow, 1996); * response latencies (Holden et al., 2001; Vasilopoulos et al., 2005); * observer ratings of personality (Mount et al., 1994); and * indirect personality measurement (James et al., 2005). None of these efforts has proven particularly practical or effective across a variety of settings. We could locate no research that has specifically examined faking on personality inventories in the context of individual assessment. However, at least one study has examined faking (i.e. IM) use and effectiveness across assessment methods for an assessment center (McFarland et al., 2003). McFarland et al. (2003) found that: * IM was more prevalent in the situational interview than in the role play; * other-focused IM tactics (e.g. other enhancement or opinion conformity) were used more frequently than self-focused tactics (e.g. self-promotion, basking in reflected glory, overcoming obstacles) in both assessment methods; and * IM use in the situational interview predicted assessor ratings and final promotion scores, while IM use in the role-play did not.\nResearch questions: We designed our study to address the following four research questions based on our review of the individual assessment and faking literature and anecdotal reports and interviews with those who conduct individual assessments: RQ1. Do assessors perceive faking on a personality inventory as a problem in the individual assessment process?\nMethod: Participants and procedure\nResults: Most respondents agreed with the statement that \"Faking is a serious threat to the validity of the personality inventory in the assessment process\" (70.2 percent, M = 3.75, SD = 1.22). Paradoxically, a majority of individuals also agreed with the statement that \"Faking on the personality inventory does not heavily affect my overall assessment of the candidate\" (58.5 percent, M = 3.34, SD = 1.19). Respondents reported an average of 23.6 percent (SD = 17.82) in response to the question \"What proportion of assessees (in percentage terms) do you think try to fake the personality inventory?\" Table I presents the mean rankings of assessor beliefs about factors that affect faking in the assessment process. \"Assessees' desire to present themselves in the best light\" was reported by respondents as the most likely reason for faking. 
\"Assessees do not take the assessment seriously\" was reported to be the least likely reason for faking.Table II presents the mean rankings of assessor beliefs about usefulness of indicators of faking. \"Extreme scores on several scales of the personality inventory that are inconsistent with all available evidence from other components in the assessment process\" was reported by respondents to the most useful indicator of faking. \"A high score on the trait-based IM scale of the personality inventory\" was reported by respondents to be the least useful indicator of faking.Table III presents the mean rankings of assessor practices to remediate faking. \"Weight the personality inventory less in scoring competencies\" was reported by respondents as the most typical way assessors use to deal with faking. \"Ask the assessee to retake the personality inventory\" was reported by respondents as the least typical way assessors use to deal with faking.\nDiscussion: The overall purpose of the current study was to examine assessor beliefs and behaviors with regards to faking in the individual assessment process. We administered a survey to experienced assessors at an international consulting firm to gather information on four research questions, discussed in turn.Do assessors perceive faking on a personality inventory as a problem in the individual assessment process?\n", "role": "user" }, { "content": "Purpose: - To gather information on assessor beliefs and behaviors in relation to assessee faking issues on a personality inventory in the individual assessment process.\nDesign/methodology/approach: - A survey approach was used in this research. Totally 77 experienced assessors who conducted individual assessments for an international consulting firm responded to the survey. Analyses of mean item rankings were used to answer several research questions.\nFindings: - Major results of the study were: assessors believe faking is a problem; assessors believe they can detect faking; and assessors believe they can effectively eliminate all of the effects of faking when evaluating the candidates.\nPractical implications: - The first implication from this research is that assessors believe that they can detect and deal with faking despite a paucity of evidence to support it. The second implication is that organizations may be reluctant to continue to develop effective methods of identifying and dealing with faking if their assessors mistakenly believe they are already successfully doing so.\nOriginality/value: - This study is the first to survey experience assessors regarding their beliefs and perceptions of faking issues in the individual assessment process and is designed to garner immediate practical insights and ideas for future testable hypotheses.\n", "role": "assistant" } ]
Major factors contributing to wind power diffusion
[ "Climate change", "Renewable energy", "Financial analysis", "Patent analysis", "Energy policies", "Wind power" ]
[ { "content": "Summarize the following paper into structured abstract.\n\n1. Introduction: The general development of renewable energies (REs), climate change and policy issues are important drivers for the diffusion of renewable and more sustainable technologies. There is a wide understanding that the climate is changing universally, mostly due to increasing greenhouse gases (GHG) in the atmosphere as a result of the increased use of fossil fuels. The surface temperature of the Earth could rise significantly during the next few decades, resulting in such impacts that we would witness an entirely new planet unknown to us (Lynas, 2008; Smil, 2010). However, improving energy efficiency, decreasing the use of fossil fuels and wide diffusion of various REs sources are among the most vital actions in limiting global warming to a sustainable level. The objective of this study is to analyse the technological, financial and political environment in the wind power industry and evaluate their interrelationships and contribution to RE diffusion. The study focuses mostly on European markets, firms and energy politics, keeping the global perspective in mind. The limitation to Europe is justified, as:\n2. Wind energy markets: 2.1 Wind power\n3. Research methods: The empirical study was executed using the case study research method. Case studies typically combine diverse data collection methods, such as interviews, questionnaires, observations and archives. Case studies may include both qualitative and quantitative data. Although case study is a distinctive research method, some scholars are concerned about the possible lack of scientific rigour that is related to an investigator's ability to execute solid case studies. Other limitations include the facts that the studies may provide too little basis for generalization and may be laborious to execute (Yin, 2009). In the present case, the research methods consisted of literature reviews, patent analysis, financial analysis and semi-structured interviews. The background information about the wind energy markets and important players was collected from consultancy reports (BTM Consult ApS, 2010a; 2010b). The six case companies selected for the study were Enercon (Germany), Siemens (Germany), Vestas (Denmark), Nordex (Germany), REpower (Germany) and Gamesa (Spain). The literature review and semi-structured interviews complemented the results of the patent and financial analysis.\n4. Empirical analysis: 4.1 Patent analysis\n5. Discussion: The increasing awareness of climate change and the scarcity of fossil fuel reserves on the horizon have made many nations consider energy alternatives. However, there exist contradictory opinions about the phenomena and solutions to solve the issues. The world's wind energy resources are significant, and technologies already exist to harness a considerable proportion of the primary RES. Wind appears practically everywhere, and the world's wind energy resources are extensive, providing opportunities to exploit both onshore and more recently, offshore wind power widely. However, wind power is still in the infancy of its life cycle and requires various support schemes to be competitive alternative energy sources in the world's energy mix.\n6. Conclusions: The study revealed that adequate energy support schemes and legislation, such as subsidies to RES and ETSs to fossil-fuels are crucial in wind power diffusion at present. 
The study showed that there exists a positive relationship between a firm's technological capabilities (e.g. strong IP) and good financial performance. However, because the wind power industry is still in the emerging phase in its life cycle, the firms need well-planned and efficiently implemented public support schemes to compete successfully in the energy markets. The wind power industry may be competitive without subsidies in 10-20 years, as learning curve effects and economies of scale, for example, increase its competitiveness. There are also some limitations in our study. First, this study was mostly limited to European firms and took the European perspective in policy issues. Second, due to the time constraints and limited resources, the expert interviewees were from Finland. Third, we limited our analysis to the main support mechanisms and countries where the diffusion of RE has been rapid. Fourth, the number of firms, the financial data and the study period were to some extent limited. To ensure the validity and reliability of the research, various data sources and data collection by other researchers were used, and so mainly data and investigator triangulation were employed. A semi-structured questionnaire was designed and cross-checked by two researchers, and the interviewees represented an adequate group of top energy experts. In the future, the study could be augmented to cover a broader array of respondents, green technologies and countries in different continents. Quantitative data (Johnstone et al., 2010; Popp et al., 2011) could also be more widely included to verify the scientific generalization of the research results more rigorously. Regarding the financial data, the still-continuing financial crisis in the world and especially in Europe may have influenced the results. All in all, it seems that support policy has been a crucial factor behind the industrial clusters in the selected countries.\n", "role": "user" }, { "content": "Purpose: The purpose of this study is to evaluate the linkages between the technological, market and political environment in the wind power industry and their contribution to market diffusion.\nDesign/methodology/approach: The evaluation is based on a literature review of the wind power industry and policy issues in selected countries, patent and financial analyses of leading European firms and semi-structured interviews with energy experts.\nFindings: The results reveal that the industry is policy-driven and appropriate energy politics are crucial in continuing the rapid wind power market diffusion during the next decades. Wind power technologies are in an accelerating stage of evolution, and competitive technologies contribute to market diffusion and firms' financial performance. However, without adequate energy subsidies and emission trading schemes, the industry will not be competitive in the energy markets, where other energy sources, including fossil fuels, are also subsidized.\nResearch limitations/implications: This case focuses mainly on the leading European industry actors and has a European perspective in policy issues. The analyses are limited to the main support mechanisms and countries where the diffusion of renewable energy has been rapid.\nPractical implications: The wind power industry is still in the emerging phase in its life cycle, and well-planned and efficiently implemented public support schemes are needed in order for the firms to compete successfully in the markets. 
The industry will probably be competitive without subsidies in 10-20 years.\nOriginality/value: The evaluation of the given indicators and political environment will give insight into the competitive environment and factors behind the diffusion of emerging technologies.\n", "role": "assistant" } ]
Teaching as constructive-developmental leadership: Insights from Mary Follett
[ "Mary Parker Follett", "Constructive-developmental leadership", "Teaching", "Learning" ]
[ { "content": "Summarize the following paper into structured abstract.\n\n__NO_TITLE__: The task of the teacher is not to make his students think of great things, but to think greatly of all things. We might say that from one point of view the essence of the teacher-student relation is a joint search for meaning [...] The complete meaning is never found in an isolated experience, only in related experiences (Follett, 1970, pp. 140-141).\nMary Parker Follett: Follett (1868-1933) was born and raised in a suburb of Boston. She attended Radcliff College, where several Harvard faculty members used to teach, including William James, the famous pragmatist philosopher, and Edwin Holt, the well-known behaviorist, both of whom influenced Follett's thinking. Based on her undergraduate research, Follett wrote her first book The Speaker of the House of Representatives in which she insightfully analyzed the functioning of the US Congress (Follett, 1896). Upon graduation, she got actively engaged in social work in Roxbury, an established neighborhood community widely recognized for its vocational training of the youth. Once she accumulated significant tenure in the area of social work, Follett became the Committee Chair of the Bostonian Women's Municipal League. She used her practical experience to develop her theory of democracy, which she formulated in her second published book entitled The New State (Follett, 1918). Continuing her work as social theorist and community activist, Follett eventually structured the insights acquired from this experience in her last published book Creative Experience (Follett, 1924). This last book established her as a pioneering historical figure of management thought (Novicevic et al., 2007). Based primarily on this work, management guru Drucker (1995) labeled her as a \"prophet of management\".\nConstructive-developmental theory of leadership: Constructive-developmental theory posits that the development of one's sense-making or meaning-making ability to construct reality evolves over time (Kegan, 1980). This development occurs through qualitatively different stages or orders of development, as the individual's sense-making system transforms while he or she learns and develops more complex ways of interpreting his or her life experience in relation to the self and others (Kegan, 1982). The \"sensemaking systems consists [sic] of principles, beliefs, thinking patterns and assumptions that govern the way individuals experience their lives\" (Valcea et al., 2011 p. 605). Through this developmental learning process, a person acquires experience, becomes more aware of his or her social environment, and eventually makes a developmental movement from stage to stage by developing a progressively more complex understanding of experience and transforming his or her worldview by taking into account multiple perspectives (Kegan, 1994). The developmental movement changes one's meaning-making system by making it more sophisticated and more appropriate to organize the person's increased understanding of his or her experiences (Kegan and Lahey, 2010). The appropriateness of these changes entails a more internally focused self-definition and more externally defined interpersonal focus (Strang and Kuhnert, 2009).\nFollett's conceptualization of teaching as leadership: Follett became interested in conceptualizing teaching as leadership as a result of her association with the program promoting the community use of public schools. 
Unfortunately, she was unable to articulate this interest formally, as the symptoms of her chronic disease prevented her from writing a Guide to the Teaching of the Civics, which was requested by the National Municipal League in 1909. Follett's interest in conceptualizing teaching as leadership was reinvigorated during her intense and productive, yet conflict-ridden, collaboration with Eduard Lindeman. Lindeman, who had practical teaching experience, was a Dewey disciple who adopted Dewey's concept of progressive communal education (Dewey, 1934/1964). Like Follett, Lindeman (1921) supported programs of community leadership. He popularized group-based, active, participatory interaction through classroom discussion, a method also promoted by Follett (1918), who viewed it as the primary path to enacting democratic social action and change.\nInstructional methods for teaching as constructive-developmental leadership: The reciprocal process of leading (teaching) and following (learning) enables both the teacher and the students to derive developmental meanings from their experiences, provided an appropriate method of instructional delivery is selected. The understanding that students develop from the ongoing practice of teaching-as-leadership will likely be transformative if the instruction stimulates their deliberate reflection. Follett relates the concepts of teaching and leadership because she observed that both teachers and managers have developed a pragmatic mindset conducive to experimentation when searching for alternative ways to instruct others. Moreover, both teachers and managers need not only the capacity for instruction but also the capability to share it with others. Only a teacher delivering instruction for transformative learning will be in a position to influence students to accept the challenge to transition to a higher stage of development.\nConclusion: In this paper we have presented Follett's conceptualization of the teacher-student relation using the lens of constructive-developmental theory of leadership. The main lesson learned from this conceptualization is that the teacher teaching today's diverse learners needs to understand how their beliefs change, develop, and construct their reality. As this construction of reality for development is central to their identity and their ability to manage learning experiences, the teacher needs to understand the way in which they filter, analyze, and organize their experiences at different stages of their constructive development. The teacher who discerns how diverse students differ in their individual ways of knowing will know how to support or challenge the shifts in their development in transformative ways.
For this discernment, the teacher needs to understand the forms of these shifts, as well as the instructional methods that elicit them.\n", "role": "user" }, { "content": "Purpose: The purpose of this paper is to derive practical recommendations from Follett's conceptualization of the teacher-student relation.\nDesign/methodology/approach: The paper employs a narrative historical interpretation of Follett's speech, originally given at Boston University in the late fall of 1928 but first published in 1970.\nFindings: Follett's conceptualization of the teacher-student relation resonates well with the contemporary constructive-developmental theory of leadership.\nResearch limitations/implications: The findings of this study should be interpreted with the recognition that a single case study has inherent limitations in terms of generalization.\nOriginality/value: This paper offers unique practical recommendations for instructional methods of experiential learning grounded in reflection, problem solving and critical thinking, derived from the authors' analysis of Follett's works and constructive-developmental scholarship.\n", "role": "assistant" } ]
Comparing pedagogies for plastic waste management at university level
[ "Sustainability education", "Inquiry learning", "Affective learning", "Pedagogical design", "Plastic waste management", "Simulations/games" ]
[ { "content": "Summarize the following paper into structured abstract.\n\nIntroduction: Education plays an important role in the sustainable development of society (Bertschy et al., 2013; Leal Filho et al., 2015). To address the significance of such education, the United Nations Decade for Education for Sustainable Development 2004-2014 was implemented, the ultimate goal of which was to empower learners in their values and behaviors toward achieving sustainable development. This was to be achieved through learning tasks that developed learners' abilities to think critically, to foresee future scenarios and to make decisions in a collaborative manner (UNESCO, 2006). With more schools including education for sustainable development (ESD) in their curricula, ESD has emerged as a new area of research, particularly in higher education (Kopnina and Meijers, 2014; Thomas, 2014). Chalkley (2006) described higher education as contributing hugely to sustainability by equipping future leaders of society and business with knowledge, attitudes, values and skills that lead to a sustainable future. Higher education graduates from different programs are entering different professional sectors and help achieve environmental sustainability if they are equipped with such a preference.\nThe program: An eight-hour ESD Program, with two teaching approaches, was developed to educate participants on the current situations and shortcomings of plastic waste management in Hong Kong. In addition, possible socio-political and technological solutions were discussed from multiple perspectives. Recommended skills and procedures for recycling plastic waste were also shared with the participants, with the ultimate goal of empowering sustainable decisions.\nMethodology: Subjects\nResults: Enhancement of knowledge, attitude and behavior after the program\nDiscussion: The present study compared two teaching methods (GS and GI) in an ESD program on plastic waste management in a higher education institution. Regarding knowledge, both teaching approaches could significantly enhance the knowledge of local waste management, the 3Rs concept and plastic waste classification. The findings also suggested that both approaches were capable of imposing positive and significant changes on students' knowledge and intended behavior. In terms of attitude, the GS approach seems more effective in leading to significant changes when compared to the GI approach. This seems to be the only difference in the comparison of the two methods.\nConclusion and future research directions: To address the increasing importance of equipping students with sustainability competence in higher education, innovative and effective ESD programs shall be designed and remodeled. Adopting GS and GI, the current study has provided empirical evidence of positive learning outcomes as well as analysis for the three key elements in the pedagogical design. Both groups demonstrated significant improvements in knowledge and intended behavior by employing active learning activities that promoted action competence. With designs that triggered cognitive dissonance among students, higher order reflections in the affective domain were successfully facilitated. 
Hence, GS demonstrated a significant result in changing participants' attitudes in favor of plastic recycling within the eight-hour program.\n", "role": "user" }, { "content": "Purpose: This paper aims to compare the learning outcomes of gaming simulation and guided inquiry in sustainability education on plastic waste management. The current study targets the identification of success factors in these teaching approaches.\nDesign/methodology/approach: This study used a quasi-experimental design with undergraduate participants who were randomly assigned to an eight-hour sustainability education class using either gaming simulation or guided inquiry. Pre- and post-tests on students' knowledge, attitudes and intended behavior were conducted, followed by individual interviews to provide more detailed reflections on the teaching approach to which they were assigned.\nFindings: In terms of knowledge acquisition and behavioral changes, the quantitative results suggested that the pre-/post-test in-group differences were significant in both groups. More importantly, a significant positive attitudinal change was observed in the gaming simulation group only. In the interviews, participants attributed effective knowledge acquisition to the active learning elements in class, while the cognitive dissonance triggered in the gaming simulation induced subsequent affective changes.\nPractical implications: Activities in this program can be applied or modified to accommodate differences in other similar programs. The findings can also provide indicators for the design of similar programs in the future.\nOriginality/value: This paper explores plausible factors (ideology and implementation) that contribute to successful sustainability education programs. Through comparison between gaming simulation and guided inquiry, elements for effective education for sustainable development learning in the pedagogical designs are discussed.\n", "role": "assistant" } ]
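A minimal sketch of the within-group pre/post comparison logic reported in the entry above, assuming hypothetical score arrays for the gaming simulation (GS) and guided inquiry (GI) groups; the study's actual instruments and data are not reproduced here.

```python
# Sketch of the pre/post within-group tests described above.
# All data below are simulated stand-ins, not the study's measurements.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical attitude scores (Likert-scale means) for two groups of n = 30.
gs_pre = rng.normal(3.2, 0.5, 30)
gs_post = gs_pre + rng.normal(0.4, 0.3, 30)   # GS: attitudes shift upward
gi_pre = rng.normal(3.2, 0.5, 30)
gi_post = gi_pre + rng.normal(0.05, 0.3, 30)  # GI: little attitudinal change

for label, pre, post in [("GS", gs_pre, gs_post), ("GI", gi_pre, gi_post)]:
    t, p = stats.ttest_rel(post, pre)  # paired t-test on matched pre/post scores
    print(f"{label}: t = {t:.2f}, p = {p:.4f}")
```

With data shaped like the study's findings, the GS paired test would be significant while the GI attitudinal change would not be, mirroring the reported pattern.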
How to educate engineers for/in sustainable development: Ten years of discussion, remaining challenges
[ "Sustainable development", "Universities", "Curricula", "Engineering schools", "Engineers", "Transition" ]
[ { "content": "Summarize the following paper into structured abstract.\n\nIntroduction: After the launching of the Report of the World Commission on Environment and Development, the Brundtland report, scattered initiatives started to implement sustainable development (SD) into higher education. Engineering universities were not leading the change. Starting from 1994, Environmental Training in Engineering Education (Entree) conferences were organised that were mainly focused on environmental efficiency issues in education of engineers. In 2002, the first Engineering Education in Sustainable Development (EESD) conference took place. It opened the scope of Entree and aimed specifically at SD (Mulder et al., 2010). The conviction was that the engineer was indispensable for SD, but at the same time that the engineer should be educated differently for this key task. A main element was that the engineer had to learn to think long term and position his/her activities in a pathway towards long term sustainable solutions. This would require insight into the social environment of engineering as a technology push approach will not work. Developing insight in the relationship between technology and society requires active learning and pedagogical renewal.EESD conferences were repeated bi-annually. What insights have been developed and what challenges remain?\nThe challenges for renewal of engineering: 1. What should engineers learn on SD?\nThe future challenge for EESD: In its first decade, EESD has produced a lot of insight regarding the task ahead. With hindsight, one could say that the initial ideas to sustainabilise engineering education were somewhat naive: developing an add-on course, teach the other teachers about SD and create options to become SD specialists are just the first step, if we can achieve them at all. There is now at least some more clarity about the challenges that we face, and the barriers to introduce change.The next steps in EESD are not so much regarding what course we should add. Instead of adding something to an always \"crowded\" curriculum, we should address the question what a curriculum should look like to contribute to SD and to have motivated students and lecturers. Instead of adding SD to an unsustainable curriculum, we should rebuild curricula by taking the contribution of a field of expertise to SD as the leading principle for curricula.A next problem is the organisational and human resource policies: how should engineering universities renew their culture emphasizing commitment to contribute to society, solidarity, openness and creativity? How can we renew our organisations and their cultures? How can sustainability be integrated in human resource policies? Is this a leverage point for cultural change? Might this contribute to making engineering education creative, effective, societally engaged, open to other disciplines and really enjoyable?But what does it actually mean to be an engineer? What should be the role of engineers in the required sustainability transitions?If we consider SD a societal learning process, universities should be at the forefront of this, given that universities are supposed to be learning-centered organisations. However, universities should relearn to be learning organisations themselves and be able to transcend the rigid disciplinary fences. 
This question is not just crucial for the engineering community but also for society at large, as we are convinced that committed and engaged engineers are crucial for our common future.\n", "role": "user" }, { "content": "Purpose: - The purpose of this paper is to analyse the process of changing engineering universities towards sustainable development (SD). It outlines the types of changes needed, both in respect of approaches, visions, philosophies and cultural change, which are crucial for engineering universities that want to implement sustainable development as part of their programmes.\nDesign/methodology/approach: - The paper describes various experiences which show how SD education programmes can be implemented at universities, and some of the challenges faced in efforts towards achieving such a goal. It considers the various processes involved and raises some questions which can help to understand how universities, as learning organisations, can engage in the implementation of SD programmes.\nFindings: - The paper has established that engineers have to learn to think long term and position their activities in a pathway towards long-term sustainable solutions. This requires insight into the social environment of engineering as a technology and into the extent to which engineers should know about the intricacies of SD problems.\nOriginality/value: - The paper shows that engineers should understand the complexities of the societal setting in which they are developing solutions, and the complexities of making short-term improvements that fit into long-term SD.\n", "role": "assistant" } ]
Production of entrepreneurship in small business activities of students
[ "Entrepreneurship education", "Action learning", "Small enterprise", "Vocational education" ]
[ { "content": "Summarize the following paper into structured abstract.\n\nIntroduction: Small enterprises and entrepreneurship play a central role in the economies of the European Union and Finland. According to a report issued by the European Commission (2008, p. 7), attitude, entrepreneurial behaviour and skills play an important role in planning education. In the Finnish vocational education system entrepreneurship is included in the curriculum for all students. The students commence their vocational education at the age of 16. They study for three years and afterwards face the choice of continuing their studies at the university of applied sciences, or another university, or seeking salaried employment or of becoming entrepreneurs. It is a stated goal of European Commission policy that the most important elements of entrepreneurship learning are the promotion of entrepreneurial skills and behaviour. According to Gibb (1993, p. 15), every student has entrepreneurial qualities that the environment may either encourage or suppress. In entrepreneurship education the challenge is to develop learning environments where learning is more practical-orientated, and targets developing entrepreneurial skills (Mwasalwiba, 2010; Nkirina, 2010). Realistic and action-orientated learning settings are the most effective in which to learn entrepreneurship (see e.g. Gibb, 2008, 2009; Hytti and O'Gorman, 2004; Rae, 2009). In teacher education, the focus is on the development of learning environments and pedagogic models that support entrepreneurship education (Ministry of Education, 2009), where the role of the teacher is to be a coach or facilitator. The entrepreneurially driven approach seems to suit teaching intended to develop entrepreneurial skills and behaviour (see e.g. Gibb, 1993; Cope, 2005; Heinonen and Poikkijoki, 2006), in other words, entrepreneurial readiness. Practical measures facilitate positive attitudes to entrepreneurship, the development of knowledge and skills related to entrepreneurship and entrepreneurial activity (Heinonen and Poikkijoki, 2006; Birdthistle et al., 2007). According to Draycott and Rae (2011, p. 140) learning entrepreneurship is a journey of exploration where the shared experiences of students make learning valuable.\nEntrepreneurial learning and action learning: How entrepreneurs learn can be seen as a model for entrepreneurship education. In a small enterprise, an entrepreneur acquires knowledge from experience, experimentation and by following others, then places the new information alongside earlier acquired information and organises the internalised knowledge for action (Holcomb et al., 2009, p. 172). Effective entrepreneurs are also good at learning from their environment: from customers, competitors, employees, partners (Gibb, 1997, p. 17; Smilor, 1997, p. 344). Both positive and negative experiences contribute to the entrepreneur's knowledge and provide guidance in making choices, through which the entrepreneur can continually develop activities (Elkjaer, 2003; Gibb, 1997, p. 16; Minniti and Bygrave, 2001, p. 5).\nA small enterprise as a learning environment: The model for learning in a small enterprise was developed back in the 1980s in England and has been deployed at various levels of education in Finland, in the form of vocational education student-run small enterprises and various projects with organisations. According to this model, entrepreneurship is linked with skills (e.g. problem solving, decision making) and attitudes (e.g. 
self-confidence, goal orientation) that affect behaviour. In entrepreneurial learning these skills, attitudes and forms of behaviour can be shaped by the environment and by tasks demanding an entrepreneurial approach. The application of that entrepreneurial approach will make it possible to spot new opportunities and to act like an entrepreneur (Gibb, 1993).\nMethodology: In this study, the activity is related to a certain case where events and the function of an individual are described in a specific real-world environment (Yin, 2003, p. 1; Yin, 2003, p. 13). A case study is a natural approach for researching teaching and learning where the question is the comprehensive inspection and illustration of practical problems (whether or not kiosk activities are appropriate for learning entrepreneurship) that may not be detached from a certain context (Syrjala et al., 1994, p. 11; Yin, 2003, p. 13).\nFindings: The "what" and "how" of entrepreneurial learning in the activities of a small enterprise.\nConclusions: The intention of this study was to examine what is involved in learning entrepreneurship through working in a small enterprise and also to establish how learning is acquired. Learning in a small business context is often proposed as a model for learning entrepreneurship in education (Gibb, 1993). Yet there have been surprisingly few empirical studies examining this idea. Therefore, the main contribution of this paper lies in developing a new understanding of learning in small enterprises by empirically examining what learning takes place and how. According to the data, students learned both from their own thoughts and actions and from those of their colleagues. Action learning is in fact both an individual and a social process (Revans, 1982, p. 547). According to the current research findings, the learning that occurred in the particular small enterprise studied included both business learning and learning to act in an entrepreneurial fashion within the context of a real enterprise. What was learned could immediately be put into practice and the consequences of that enactment reflected upon in terms of whether the acquired information was sufficient and applicable for the activity in question. The student entrepreneurs also had to consider the impact of learning and whether new learning generates any new problems, and if so, how to deal with them. Learning proved to be a cycle where contents and activities are tightly interlinked. However, the learning remained single-loop in that it was concentrated on daily tasks. Management learning is related to decision-making situations and action management as a result of them. My role was mainly to support the act of reflection and to guide the actions. I assume that the teacher would have an important role at the beginning, because the learning environment was new and the students were used to receiving guidance throughout their basic education. However, during the year they managed to solve problems without guidance. Therefore the role of the teacher decreased, but at the vocational level the educator is still always present in some way.\n", "role": "user" }, { "content": "Purpose: The importance of entrepreneurship education and the lack of studies at the vocational level have created a need to develop a model of entrepreneurship learning.
The purpose of this paper is to determine what vocational college students learn about entrepreneurship from working in a small enterprise, how they learn it and what role the teacher has in the learning process.\nDesign/methodology/approach: The data used are drawn from discussions between eight students (aged 16-18), where they assessed and developed their business activities conducted over the course of a year. Using qualitative data collection and analysis methods, content connected with entrepreneurship was examined to provide information about activities they undertook and how the learning process occurred.\nFindings: The findings indicate that learning in a small enterprise is a continuous process, where content, activities and process are intertwined. The core of learning is reflection-in-action, to develop entrepreneurial skills and behaviour.\nPractical implications: This paper may offer a means of applying a small business learning model to entrepreneurship learning. The results can benefit teachers planning entrepreneurship learning.\nOriginality/value: This paper contributes to the understanding of student learning in small enterprises by empirically examining what learning takes place and how.\n", "role": "assistant" } ]
The role of chronological age and work experience on emotional labor: The mediating effect of emotional intelligence
[ "Emotional labour", "Chronological age", "Emotional intelligence", "Work experience" ]
[ { "content": "Summarize the following paper into structured abstract.\n\nIntroduction: Research into emotional labor has increased markedly over the past three decades since the groundbreaking work of Hochschild (1983), in part stimulated by developments in the labor market (Fisher and Ashkanasy, 2000; Zapf, 2002). Emotional labor refers to the process of emotional management at work in order to fulfill the emotional expression requirements within an organization (Grandey, 2000; Hochschild, 1983). Despite this growing scholarly interest in emotional labor, vital questions remain unanswered with regard to the individual characteristics of the antecedents of emotional labor (e.g. Bono and Vey, 2005; Fisher and Ashkanasy, 2000; Hulsheger et al., 2010; Rubin et al., 2005). Although previous research has examined the links between individual characteristics (e.g. age, gender, emotional intelligence (EI), affectivity, and personality type) and emotional labor, the results reveal large unexplained variations in the strength and direction of the effects (Dahling and Perez, 2010; Cheung and Tang, 2010). As such, it has been hard to reach reliable conclusions regarding the nature of these relationships, which in turn has hindered the formulation of clear directives for management practices.\nLiterature review and hypothesis development: Emotional labor and its antecedents\nResearch method: Data collection and participant characteristics\nResults: Reliability, validity, and common method bias\nDiscussion: The purpose of this study was to examine not only whether chronological age and work experience affect the selection of emotional labor strategies, but also whether EI plays a role as a mediator on the relationship between chronological age/work experience and emotional labor. Our findings were as follows. First, chronological age had a positively significant effect on surface acting, which did not support H1a. However, support for H1b was provided by a significant positive relationship between chronological age and deep acting. Second, work experience was negatively associated with surface acting and non-significantly associated with deep acting, which provided support for H1c but not H1d. Third, indirect effect from chronological age to emotional regulation strategies via EI was statistically not significant. Therefore, H2a and H2b were not supported. Fourth, indirect effect from work experience to surface acting via EI was not significant, which meant H2c was also not supported. Fifth, indirect effect from work experience to deep acting via EI was statistically significant. Hence the result supports H2d that EI plays a mediating role between work experience and deep acting.\n", "role": "user" }, { "content": "Purpose: The purpose of this paper is to investigate how chronological age and work experience affect emotional labor strategies (i.e. deep acting and surface acting) through emotional intelligence (EI).\nDesign/methodology/approach: Structural equation modeling analysis provided support for the hypotheses based on a sample of 256 flight attendants working for four South Korean airlines.\nFindings: The results showed that chronological age has a positive effect on both surface and deep acting. The study also found that work experience has a negative influence on surface acting, whereas it has a non-significant effect on deep acting. 
In addition, the investigation suggests that EI mediates the relationship between work experience and deep acting.\nOriginality/value: The current study will add to the growing body of research on emotional labor by examining the effect of chronological age and work experience on emotional labor strategies through EI.\n", "role": "assistant" } ]
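The mediation result above (EI carrying the effect of work experience to deep acting) was tested with SEM; a simplified, hypothetical illustration of the same indirect-effect logic, using OLS slopes and a percentile bootstrap rather than SEM, might look as follows. Variable names and data are simulated stand-ins.

```python
# Bootstrap of an indirect effect (work experience -> EI -> deep acting).
# Simplified OLS stand-in for the paper's SEM; all data are simulated.
import numpy as np

rng = np.random.default_rng(42)
n = 256  # matches the study's flight-attendant sample size

experience = rng.normal(8, 4, n)
ei = 0.3 * experience + rng.normal(0, 1, n)    # a-path (X -> M)
deep_acting = 0.4 * ei + rng.normal(0, 1, n)   # b-path (M -> Y)

def indirect_effect(idx):
    x, m, y = experience[idx], ei[idx], deep_acting[idx]
    a = np.polyfit(x, m, 1)[0]                        # slope of M on X
    design = np.column_stack([np.ones_like(x), x, m])
    b = np.linalg.lstsq(design, y, rcond=None)[0][2]  # slope of Y on M, given X
    return a * b

samples = [indirect_effect(rng.integers(0, n, n)) for _ in range(2000)]
lo, hi = np.percentile(samples, [2.5, 97.5])
print(f"indirect effect 95% CI: [{lo:.3f}, {hi:.3f}]")  # CI excluding 0 -> mediation
```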
Polyphenoloxidases in Jerusalem artichoke (Helianthus tuberosus L.)
[ "Vegetables", "Food preservation" ]
[ { "content": "Summarize the following paper into structured abstract.\n\nIntroduction: The Jerusalem artichoke (Helianthus tuberosus L.), in Europe Topinambur, of the family Asteraceae, is a sunflower with an edible, mostly inulin containing, tuberous root. It has many varieties (Stolzenburg, 2002a, b). The name comes from the Italian \"girasole articiocco\" or Sunflower artichoke. A native of North America, it grew in the wild along the eastern seaboard from Georgia to Nova Scotia. A review about the history, agronomic considerations of the Jerusalem artichoke as well as the composition of artichoke tubers and industrial application was given by Fleming and Grootwassink (1979). There are not enough studies on the chemical composition of the Jerusalem artichoke. Furthermore, no information is available on its polyphenoloxidases (PPO) activity.Polyphenoloxidases (EC 1.14.18.1) are a group of bifunctional copper proteins that are widely distributed from bacteria to mammals (Whitaker et al., 1995). They have been recognized as a main source of enzymatic browning. The three types of proteins related to PPOs are catechol oxidase, laccase and creolase (Sheptovitsky and Brudvig, 1996). They catalyze two reactions in the presence of molecular oxygen: hydroxylation of monophenols to o-diphenols (monophenolase activity) and oxidation of o-diphenols to their o-quinones (diphenolase activity) which then spontaneously polymerize into red, brown or black pigment (melanin). It is this latter property of PPO that makes it an important enzyme in the food industry. The steps as well as the substrates of the enzymatic browning, the physiological meaning of the PPO were noted in the scientific literature (Cheftel et al., 1984, Schneider, 1988). The principle and the reaction velocity were also descibed (Herrmann, 1976, Herrmann, 1978). It has been shown that browning in Jerusalem artichoke tubers extracts is attributed partially to the presence of PPO (Sawistowski et al., 1987). PPO catalyzing the oxidation of polyphenols to corresponding quinones is a key enzyme of enzymatic browning. Enzymatic browning is the main function of PPOs in fruits and vegetables, and it often is undesirable and responsible for unpleasant sensory qualities and losses in nutrient quality (Sanchez-Amat and Solano, 1997). This reaction is important in food preservation and processing and its regulation is important in the food industry. Sawistowski et al. (1987) inhibited the browning reaction in Jerusalem artichoke tubers in food processing using sulphite alone.The study was conducted to examine the importance as well as the activity and inactivation of the PPO in nine varieties of Jerusalem artichoke tubers, namely Gigant, Large White, Medius Bruckmann, Medius Lindhoop, Petit Blanc, RoZo, Stamm, Waldspindel, and Wildacker.\nMaterial and method: Apparatus\nMethod: Extraction of PPO\nResults and discussion: The enzyme content of a solution will be generally determined by measuring the enzyme activity. The results (Figure 1) showed that the highest PPO activity was measured in the Medius Lindhoop skins (3,026 nkat), followed by Waldspindel (2,309 nkat), Gigant (2,256 nkat), RoZo (2,043 nkat), Wildacker (1,818 nkat), Stamm (1,805 nkat), Petit Blanc (1,592 nkat) and Medius Bruckmann (1,592 nkat). The sort Large White exhibit the lowest PPO activity in the skins (1,274 nkat). There were nearly no enzyme activity in their pulp (2 up to 5 nkat). The PPO activity was found mainly in the skins for all varieties tested. 
That means the browning reactions can be considerably reduced by removing the skins. Like those for any natural product, these results are affected by the harvest period. The PPO activity should decrease during maturation, as free phenolic compounds are consumed in protecting the plant from injuries and desiccation. This shows the role of PPO as an antioxidant (radical scavenger) in the plant body (Coseteng and Lee, 1987; Geetika, 2003). In this work, we have established that 3,4-dihydroxyphenylalanine (DOPA) was a substrate for the Jerusalem artichoke PPO. Laccases, monoamine oxidases, cytochrome oxidases or peroxidases are also able to oxidize DOPA (Palmer, 1963).\nEffect of temperature on PPO activity\nConclusion: The phenolic compounds as well as the PPO were localized in the skins of Jerusalem artichoke tubers. The Jerusalem artichoke tuber skins analysed exhibit high PPO activity (1,274 nkat to 3,026 nkat). It is necessary to inactivate the PPO in order to provide unchanged phenolic compounds for nutrition. An optimum temperature of 60°C and an optimum pH of 7.0 for Jerusalem artichoke PPO were found in this work. Inactivation is possible in the acidic pH range. The PPO activity is negligible at pH values below 4.5. As a natural product, lemon juice offers an ideal combination of ascorbic and citric acid for enzyme inactivation. It is possible to replace sulphite, a chemical agent that is normally used for the inactivation of PPO in the food industry, with natural reducing agents and heat. That means the processing of Jerusalem artichoke tubers can be done in accordance with the norms of organic (bio) food production and the EEC guidelines for organic food. A linear correlation was not found between phenolic compounds and PPO. In this work the PPO activity was determined with the crude enzyme.\n", "role": "user" }, { "content": "Purpose: - The purpose of this study was to investigate the polyphenoloxidase (PPO) activities in Jerusalem artichoke tubers as well as their inactivation. The following are important for the processing of this food: treatment and conversion used in preserving processes.\nDesign/methodology/approach: - The PPO of whole tubers and of the separated tuber skins was determined using a modified PPO assay with 0.01 M dopamine·HCl as reactant and photometry (ΔA470 plotted against reaction time up to 3 min). Nine varieties used in German tuber production were tested.\nFindings: - The highest enzyme activity of PPO was in the skin, ranging from 1,274 to 3,026 nkat. In the tuber pulp, only traces of PPO could be detected (range: 2 to 5 nkat). The inhibition of PPO activity in tuber homogenates was investigated in simulated processes used in the food industry: heat (drying, pasteurisation) and oxidase inhibitors such as lemon juice, ascorbic and/or citric acid. The optimum temperature of PPO was 60°C and inactivation occurred at 85°C. The range of PPO activity was between pH 5 and 10, with the optimum at pH 7.0. Lemon juice is a natural inhibitor of PPO activity in fruit and vegetable juices containing Jerusalem artichoke as well as in canning the tubers. Lemon juice combined with citric acid and ascorbic acid is recommended against the enzymatic browning reaction.\nOriginality/value: - The characterisation of the parameters which influence enzymatic browning of Jerusalem artichoke tubers during food processing (heat and pH) has not been described before.\n", "role": "assistant" } ]
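A worked note on the units used in the entry above (standard SI definitions, not taken from the paper): katals measure moles of substrate converted per second, so the reported skin activities can be converted to classical enzyme units as follows.

```latex
% Katal-to-enzyme-unit conversion (standard SI definitions)
1~\mathrm{kat} = 1~\mathrm{mol\,s^{-1}}, \qquad
1~\mathrm{nkat} = 10^{-9}~\mathrm{mol\,s^{-1}}, \qquad
1~\mathrm{U} = 1~\mathrm{\mu mol\,min^{-1}} = \tfrac{1}{60}~\mathrm{\mu mol\,s^{-1}} \approx 16.67~\mathrm{nkat}
```

On this conversion, the reported skin range of 1,274 to 3,026 nkat corresponds to roughly 76 to 182 U.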
Leader-follower congruence in loneliness, LMX and turnover intention
[ "Leader-follower congruence", "Workplace loneliness", "LMX", "Turnover intention" ]
[ { "content": "Summarize the following paper into structured abstract.\n\nIntroduction: In recent years, feeling lonely in the workplace has become an issue that has aroused growing concern, and the concept of workplace loneliness has been proposed as a type of unpleasant feeling resulting from a person's perception of lacking satisfactory social relationships in the workplace (Lam and Lau, 2012). Such a sense of loneliness felt in the workplace will affect not only followers' organizational commitment (Ayazlar and Guzel, 2014) and work performance (Ozcelik and Barsade, 2011), but also their organizational citizenship behavior and leader-member exchange (LMX) (Lam and Lau, 2012).\nTheories and hypotheses: The four different scenarios of congruence and incongruence in leader-follower loneliness\nMethod: Participants and procedure\nAnalyses: In order to test H1-H3, we used polynomial regressions and response surface modeling (Edwards and Parry, 1993). In particular, we first estimated the following equation (to simplify, we omitted all control variables in the presentation):\nResults: Table I shows the means, standard deviations, intercorrelations, and coefficient as of the variables. Among the control variables relating to LMX and turnover intention, only dyadic tenure is found to correlate negatively with turnover intention (r=-0.16, p\nDiscussion: From the dyadic perspective of leaders and followers, we examined the leader-follower congruence/incongruence effect of workplace loneliness on followers' turnover intention and the role of LMX in mediating such effects. It was found that LMX was higher when leaders and followers were congruent at workplace loneliness than otherwise. The low-low congruence condition had higher LMX than the high-high congruence condition, more detrimental effects on LMX were produced when followers were at a higher level of workplace loneliness than leaders, and LMX partially mediated the congruence/incongruence effect of leader-follower workplace loneliness on follower turnover intention.\nConclusion: From the dyadic perspective of leaders and followers, we discovered the leader-follower congruence effects of workplace loneliness on predicting LMX, and such effects could be carried to follower turnover intention by LMX. This study has therefore highlighted the significance of leaders' congruence with followers on workplace loneliness. By matching leaders with followers on workplace loneliness, organizations may counter the detrimental effects brought about by loneliness in the workplace.\n", "role": "user" }, { "content": "Purpose: Research on workplace loneliness has thus far been dominated by perceptions of followers; hence, few researchers have considered the perspective of leader-follower congruence. The purpose of this paper is to examine how the leader-follower relationship mediates the relationship between leader-follower congruence/incongruence in workplace loneliness and turnover intentions.\nDesign/methodology/approach: Data were collected from 232 leader-follower dyads from ten companies in China. Polynomial regression combined with the response surface methodology was used to test the hypotheses.\nFindings: Four conclusions were drawn. First, leader-member exchange (LMX) was higher when leaders and followers were aligned in terms of workplace loneliness than otherwise. Second, in the case of leader-follower congruence, LMX rose as their workplace loneliness fell. 
Third, in the case of incongruence, followers had lower LMX when they were lonelier than their leaders. Finally, LMX partially mediated the leader-follower congruence/incongruence effect of workplace loneliness on followers' turnover intention.\nOriginality/value: This study emphasized the importance of leaders' congruence with followers in workplace loneliness. Additionally, it extended research on leader-follower congruence from a positive perspective to a negative one.\n", "role": "assistant" } ]
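The polynomial equation referenced in the Analyses section of the entry above did not survive extraction; the canonical Edwards and Parry (1993) specification, with leader loneliness L and follower loneliness F predicting LMX, takes the quadratic form below. The coefficient labels are illustrative, not necessarily the paper's own notation.

```latex
% Canonical Edwards & Parry (1993) quadratic polynomial (illustrative labels)
\mathrm{LMX} = b_0 + b_1 L + b_2 F + b_3 L^2 + b_4 (L \times F) + b_5 F^2 + \varepsilon
```

Response surfaces are then inspected along the congruence line (L = F) and the incongruence line (L = -F) to evaluate hypotheses such as H1-H3.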
Enhancing brand equity through emotions and experience: the banking sector
[ "Customer experience", "Banking sector", "Brand equity", "Dominance", "Pleasure", "Arousal" ]
[ { "content": "Summarize the following paper into structured abstract.\n\n1. Introduction: In a globalized world with enormous competition, the banking sector is struggling for providing good experiences to their customers. Customers expect a high level of experience from banks, which, in turn, could increase customer satisfaction and customer retention levels (Putnam, 2012; Brakus et al., 2009).\n2. Theoretical framework: 2.1 Stimulus-organism-response model\n3. Research model and hypotheses: The research model (see Figure 1) is developed based on the S-O-R framework. The stimuli are represented by bank experience dimensions, which encompass: accessibility, ease of doing business, executional excellence, personalized offering, staff engagement, value for money and reputation.\n4. Method: 4.1 Questionnaire design\n5. Results: First, we conducted an exploratory factor analysis with the principal component and Varimax rotation for the bank experience (since this construct is adapted from a consulting approach and does not come directly from scales of past studies). The items of bank experience generated seven factors that accounted for 64 percent of total variance. The value of Kaiser-Meyer-Olkin Measure of Sampling Adequacy--0.861--demonstrates a good correlation between items. Bartlett's test of sphericity (1,218.259, p<0.001) also revealed significant. Confirmatory factorial analysis (using LISREL 8.80) was employed to analyze factor stability. The goodness-of-fit indexes revealed a good fit: kh2/df=3.4, p<0.05; CFI =0.94; RMSEA =0.08 (Bagozzi and Yi, 1988).\n6. Discussion and implications: In the current study, we employed the attributes of bank experience comprised in seven main categories or dimensions by KPMG International (2014) that focused on financial organizations. These stimuli positively influence the customer emotional states, which, in turn, contribute to enhancing the perception of brand equity among customers from Generation Y. Thus, in terms of theoretical contribution, we extended the S-O-R framework to congregate attributes of bank experience as stimuli and brand equity as response.\n", "role": "user" }, { "content": "Purpose: The banking sector is devoted to attracting Generation Y customers with their particular tastes and interests. The purpose of this paper is to explore for the first time how bank experience influences emotions and perception of brand equity. Using the stimulus-organism-response (S-O-R) framework, the authors consider the perception of brand equity as the outcome.\nDesign/methodology/approach: A panel based on the method of Walsh and Beatty (2007) with completed and usable questionnaires was used to test the proposed model. The authors selected a sample of 211 respondents. After eliminating some inconsistencies, a final data sample consisted of 205 usable survey participants (male: 58 percent; female: 42 percent).\nFindings: Executional excellence, staff engagement and value for money are the most relevant indicators in shaping the overall bank experience. Pleasure is the positive emotion that most enhances the perception of brand equity.\nOriginality/value: This study extends the S-O-R model by employing dimensions of experience as stimuli and brand equity as response. The study demonstrates the role of emotions (particularly pleasure) in enhancing brand equity among individuals from the Generation Y.\n", "role": "assistant" } ]
Predicting abandonment of brand social media accounts
[ "Social media", "Marketing", "Online communities", "Survival analysis", "Network analysis" ]
[ { "content": "Summarize the following paper into structured abstract.\n\n__NO_TITLE__: Social media has not only changed how individuals consume information and manage their interpersonal relationships, it has also transformed how brands communicate with and develop sustained relationships with consumers. Many brands have invested in social media strategically in order to reach key audiences and to engage them in conversation by using tailored content and messages designed to accomplish specific communication goals (e.g. crisis management or increased sales). Brand successes over the past decade have led many brands to adopt a \"digital first\" approach where the investment of time and resources on social media platforms is a top priority (Travis, 2001). Indeed, many corporations use pages on platforms like Facebook, Twitter, and Instagram to establish their brand identity and to help disseminate information related to their products, marketing efforts, and social responsibility campaigns (DiStaso and McCorkindale, 2013). As a result, brands are now an everyday part of the fabric of social media where consumers expect to be able to find and connect with their favorite brands.\nLiterature review: Abandoned social media accounts\nMethod: Data collection\nResults: While representing a minority of brand accounts, 16% (n = 525) of all accounts (n = 3,380) were abandoned making them relatively common. Indeed, 71 of 100 brands had at least one abandoned account with Nike (n = 41) having the most followed by IBM (n = 37), Google (n = 33), and Intel (n = 33). Sixty-two percent of accounts that were abandoned were allowed to become inactive (n = 323), while 25% were deleted (n = 134) by the account holder. In contrast, only 13% of accounts protected (n = 68) their tweets to restrict access to their account. It is noteworthy that abandoned accounts are not typically stillborn. Among inactive accounts, the only type of abandoned accounts where it is possible to retroactively identify when they were abandoned, at the time of abandonment over 85% (n = 272) were a year or older, 50% (n = 161) were active for over three years, and 21% (n = 68) exceeded five years of activity. The average age at abandonment for inactive accounts was 3.17 years (SD = 1.94) with the oldest abandoned account, @SamsungMobileCA, being 7.5 years old at the time of its abandonment. In contrast, the average account age overall was 5.89 years (SD = 2.1) with @Starbucks posting for over 10.5 years making it the oldest brand account in the dataset. In short, abandoned accounts make up a sizeable minority of brand accounts, most commonly simply become inactive, and are relatively active before their abandonment.The contributions of different types of variables in predicting account abandonment were assessed using three nested survival analysis models. First, model one considered the effect of the number of followers and the account's verification status on the time until the account is abandoned. The overall model fit the data well, (kh2(2) = 138.37, p < 0.001), with a -2 log-likelihood of 4,353.72 which differed from the deviance of the null model by 145.05 (p < 0.001). Next, model two examined whether an account's posting behavior contributed significantly to the ability to predict the survival time of an account. Model two also represented a good fit to the data, (kh2(6) = 489.54, p < 0.001), and provided a significant improvement in fit over model one, (kh2(4) = 297.34, p < 0.001). 
Finally, model three incorporated accounts' positions within retweet and addressivity networks. Model three exhibited a good fit to the data (χ²(8) = 544.54, p < 0.001), and represented an improved fit over model two (χ²(2) = 25.89, p < 0.001). Thus, the position of accounts within their respective brand networks explains variance above that explained by the number of followers, verification status, or account posting behavior. Table 1 provides model summaries and, unless otherwise stated, reported values are from model three. Hypothesis one stated that unverified accounts will have increased risk of abandonment compared to verified accounts. Verification status was a significant predictor across all three models, b = 0.61, p < 0.001, with the positive coefficient indicating an increased risk of abandonment for unverified accounts. Therefore, hypothesis one is supported. Hypothesis two proposed that the number of account followers would be negatively associated with the risk of abandonment. Although the results of model one found that the number of followers was negatively associated with the risk of account abandonment, b = -0.77, p < 0.001, it was not significant in model two after the inclusion of variables representing the brand's posting behavior, b = -0.04, p > 0.05, or in model three, b = 0.02, p > 0.05. Thus, hypothesis two is not supported. Hypothesis three consisted of two complementary hypotheses regarding the effect of an account's posting behavior on the risk of abandonment. Hypothesis three A held that an account's inconsistency in posting would be positively associated with the risk of account abandonment. After controlling for all other variables, posting inconsistency was positively associated with the likelihood of account abandonment across all three models, b = 4.17, p < 0.001. Hypothesis three A is supported; accounts that were less consistent in posting had an increased likelihood of abandonment. Hypothesis three B posited that the volume of posts would be negatively associated with account abandonment. In other words, brands are less likely to abandon accounts that post more. The results supported this: an increased volume of posts was negatively associated with abandonment, b = -7.89, p < 0.001. Thus, hypothesis three B is also supported. Finally, hypothesis four held that centrality within a brand's network of accounts is negatively associated with account abandonment. The results show that the in-degrees of an account within the addressivity network were negatively related to account abandonment, b = -0.48, p < 0.05, after controlling for all other variables, including the account's posting volume and consistency and the number of account followers. In other words, when other brand accounts frequently mentioned or replied to an account, it had a decreased risk of abandonment. Likewise, after controlling for all other variables, there is a negative relationship between the out-degree of an account's retweets and its risk of being abandoned, b = -0.55, p < 0.01, such that accounts that are frequently retweeted (i.e. have their posts republished) by other brand accounts are buffered against the risk of abandonment. Hypothesis four is supported. In short, while several covariates predict the survival time of a social media account, the number of followers surprisingly does not.
Decreased risk was associated with verified account status, higher post volume, increased consistency in posting, and centrality in networks representing the interaction of same-brand accounts.\nDiscussion: The diffusion of innovations theory suggests that brands found it advantageous and cost effective to adopt social media to accomplish a variety of goals (e.g. customer service, marketing, public relations, etc.). Furthermore, social media's low barrier of entry means that brands can afford to experiment by creating large numbers of accounts designed to fulfill different functions or meet the needs of specific demographics. Unfortunately, not all social media accounts are ultimately successful; some accounts fall inactive or are closed. This study modeled the risk of account abandonment over time to test which factors contributed to brands ultimately abandoning their Twitter accounts. As healthy brand social media accounts require the participation of both brands and consumers, it was hypothesized that the factors that contributed to healthy accounts would address the needs of both parties. The results indicated that the potential audience for an account, in terms of the number of account followers, was not a significant predictor of account abandonment after controlling for the account's posting behavior or position within the brand network. On the one hand, this is somewhat surprising given that many benefits of social media to brands depend upon a large community of users. However, brands have specific goals for participating in social media and it is rational to abandon social media accounts with a large reach if those goals are unmet. For example, advertising campaigns have limited life spans and the associated accounts may close after the campaign has run its course. Likewise, accounts for products that are defunct or are no longer available in a regional market may be shuttered. Finally, brand initiatives can shift so that accounts that promote previous goals or values (e.g. environmentally friendly automotive technology) may be closed so as not to contradict current brand messages. While there is a need for additional research into the content of the accounts, it is clear that audience size is a poor predictor of account survival. The posting behavior of an account did prove to be an important predictor of account survival in terms of both the volume and consistency of posting. The risk that an account would be abandoned decreased as the volume of an account's posts increased. This likely reflects the effort and value the brand places upon an account. More interesting is that as posting became more erratic (i.e. less consistent) the risk that an account would be abandoned increased. As operationalized in this study, this effect is independent of the posting volume and highlights the importance for brands of consistently engaging with their community of followers. Accounts that post inconsistently may have less utility to consumers as they are not a reliable source of information about the brand. Likewise, brand communities, which center on interactions around a brand or its products and services, may benefit from content that can serve as a focal point for interaction. Brand accounts that were more central in networks of same-brand accounts had lower risk of abandonment. There are several reasons why accounts may benefit from their network position. First, the interactions (e.g.
mentions, replies, and retweets) that comprise the network links may help engaged consumers find other brand accounts of which they were unaware. Thus, accounts with high centrality have greater exposure to the larger Twitter community. Additionally, for unverified accounts (i.e. accounts not verified by Twitter to belong to the brand), these interactions may confer trust and credibility by verifying the account's authenticity through its interactions with known brand accounts. Finally, interactions among the accounts, even in the absence of consumer participation, may lend a human conversational tone that promotes bonding and, ultimately, engagement with the account. This study provides three key insights regarding brand social media accounts. First, the findings suggest that it is less important to have a high volume of posts than to post consistently. This is likely a result of how Twitter presents posts to its users; the Twitter news feed presents content in reverse chronological order, meaning that content older than a few hours or days, depending on how many accounts a user follows, is below the fold and will not be seen without extensive scrolling. Second, this study highlights how individual accounts serve as points of contact for the brand and can work synergistically to influence their collective success or failure. This finding suggests that an expansive perspective, considering not just the health of individual accounts but rather the larger brand network, could be beneficial. Accounts that are structurally isolated from the larger network of brand accounts are potentially underutilized and in danger of abandonment. Finally, most accounts were simply allowed to fall inactive, but a sizeable minority, roughly 25% of abandoned accounts, were deleted rather than protected. Deleting an account rather than protecting it or simply allowing it to fall inactive is ill-advised, as it precludes the opportunity for brands to inform followers where they can find the brand's current social media presence. More seriously, bad actors can appropriate the screen name and impersonate the brand. During the identification of abandoned accounts, the authors observed several cases of screen names that were subsequently associated with new accounts, as determined by Twitter's unique account identification numbers. A best practice would be to re-register the defunct screen name of deleted accounts immediately under a protected brand account. This study also demonstrates the utility of survival analyses for assessing social media account performance. Although survival analyses are common in medical and engineering research, they are rarely used to evaluate social media data (Chancellor et al., 2016; Yang and Counts, 2010; Zhang et al., 2012) despite the fact that the automated timestamps associated with events such as posting and account creation are conducive to this form of analysis. In the current study, survival analysis allows for assessment of risk of abandonment at any given point in time and, thus, can serve as a diagnostic tool in assessing account health in the moment as opposed to modeling cumulative risk over the study's period. Furthermore, survival analysis can be used to answer questions of particular interest to both social media researchers and practitioners. For example, it could be used to assess how long users continue to engage with an account over time as well as the variables that contribute to engagement.
It can also compare survival rates of groups over time, providing a useful statistic for A/B testing. In short, survival analysis models are well suited to social media data and could be fruitfully applied in future social media research. Limitations and future directions of research\nConclusion: In conclusion, this paper found that posting volume and consistency as well as measures of account centrality significantly predicted the risk of account abandonment, while the number of account followers did not. This paper is, to our knowledge, the first to consider how multiple brand social media accounts influence the success and failure of one another and provides a jumping-off point for further research on account abandonment. However, in closing we wish to remind readers that account abandonment is not synonymous with account failure. Brands and consumers have goals that they wish to achieve when engaging on social media and it is ultimately the success or failure of those goals that likely leads to abandonment.\n", "role": "user" }, { "content": "Purpose: The purpose of this study is to describe online factors that predict abandonment of social media accounts.\nDesign/methodology/approach: This study conducts an analysis of 3,380 Twitter corporate brand accounts belonging to Interbrand's top 100 global brands to determine if brand posting behavior and network position predict the length of time accounts survive.\nFindings: Results found that both posting volume and consistency as well as network position were significant predictors of account survival time yet, surprisingly, the number of account followers was not. Brand social media accounts were often not formally closed or protected but allowed to fall inactive.\nOriginality/value: Despite the existence of a healthy body of research on why individuals quit social media, there is little research on how brands abandon their social media accounts or the factors that predict abandonment. This study identifies indicators for at-risk accounts and provides guidance regarding best practices when abandoning accounts.\n", "role": "assistant" } ]
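The survival-analysis workflow described in the entry above (modeling time-to-abandonment from account covariates, plus group comparisons for A/B testing) can be sketched with the Python lifelines library. The following is a minimal illustration on synthetic data, not the authors' actual pipeline; all column names and simulated values are assumptions for demonstration only.

```python
# Minimal sketch of the survival-analysis workflow described above, using
# the lifelines library on synthetic data; this is NOT the authors' actual
# pipeline, and all column names and simulated values are invented.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter, KaplanMeierFitter
from lifelines.statistics import logrank_test

rng = np.random.default_rng(42)
n = 500
df = pd.DataFrame({
    # Days from account creation until abandonment or end of observation.
    "duration_days": (rng.exponential(scale=400, size=n) + 1).round(),
    # 1 = abandoned during the study window, 0 = censored (still active).
    "abandoned": rng.binomial(1, 0.6, size=n),
    # Hypothetical covariates mirroring the study's predictors.
    "log_followers": rng.normal(8, 2, size=n),
    "post_volume": rng.poisson(30, size=n),
    "posting_consistency": rng.uniform(0, 1, size=n),
    "network_centrality": rng.uniform(0, 1, size=n),
})

# Cox proportional hazards model: a hazard ratio below 1 means the
# covariate is associated with decreased risk of abandonment.
cph = CoxPHFitter()
cph.fit(df, duration_col="duration_days", event_col="abandoned")
cph.print_summary()

# Kaplan-Meier estimate plus a log-rank test comparing two groups,
# e.g. for the A/B-testing use case mentioned in the text.
high = df["posting_consistency"] >= df["posting_consistency"].median()
km = KaplanMeierFitter()
km.fit(df.loc[high, "duration_days"], df.loc[high, "abandoned"],
       label="consistent posters")
print(km.median_survival_time_)

result = logrank_test(
    df.loc[high, "duration_days"], df.loc[~high, "duration_days"],
    event_observed_A=df.loc[high, "abandoned"],
    event_observed_B=df.loc[~high, "abandoned"],
)
print(result.p_value)
```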
Revisiting the tourism-economic growth nexus: evidence from the United States
[ "GDP", "Tourism", "Economic growth", "Granger causality", "Bounds test" ]
[ { "content": "Summarize the following paper into structured abstract.\n\nIntroduction: Tourism has gained widespread attention in recent years because of its potential contribution to the global economy. As one of the most rapidly growing sectors in the world, it contributes to economic growth by creating jobs, generating additional revenue streams and enhancing technology and information transfers across borders (WTO, 2015). For developing countries, it also provides the impetus for infrastructure improvement and serves as an important source of foreign exchange earnings necessary for capital, consumer and intermediate goods imports (Oh, 2005; Phiri, 2016). The USA is an important player in the global tourism industry. In 2016, approximately 75.6 million international tourists visited the USA, the second most visited country in the world after France (WTO, 2017), and generated about $205.9bn in revenues, the most among all countries (WTO, 2017).\nA brief review of literature: The theoretical evidence on the relationship between the tourism sector and economic growth is well-documented. One stream of literature explores the long-run relationship between the two using either endogenous or exogenous growth models (Hazari and Sgro, 2004; Lanza and Pigliaru, 1995, 2000; Lozano et al., 2008), and proposed the tourism-led economic growth hypothesis (Panagiotidis et al., 2012), which argues for a unidirectional causality running from tourism to economic growth. The proposition views international tourism as a strategic factor for long-run economic growth, generating direct, indirect or induced effects on other productive sectors of the economy (Tugcu, 2014). Specifically, tourism positively affects the destination country by:\nEmpirical methods: The variables employed include international TA to the USA, which is a proxy for the country's tourism industry development, the RGDP, which is an indicator of economic growth, and the REER of US dollars, i.e. the weighted average of US dollars relative to a basket of other major currencies after adjusting for relative consumer prices. The REER partially captures the relative prices tourists will pay in the host country compared to other destinations, and changes in REER should indicate how travel cost varies due to either currency value fluctuations or relative price movement. We model TA in the USA as in equation (1):\nData: Monthly data on real GDP in the USA, international TA in the USA, and the REER of US dollars from January 1996 to March 2016 are used for our analysis. Data on international TA were obtained from the National Travel and Tourism Office of the US Department of Commerce, and are used to measure the level of inbound tourist activity in the USA. An alternative metric commonly used in the literature is tourism receipts. However, these data are only available for a much shorter period. Data on real GDP, measured in billions of dollars, were obtained from the Bureau of Economic Analysis and Macroadviser.com. The REER data, calculated as weighted averages of bilateral exchange rates adjusted by relative consumer prices and indexed to 2010 dollars, were obtained from the Federal Reserve Bank of St. Louis.\nResults and discussions: All three variables are converted to their natural logarithms to reduce skewness and kurtosis. Additionally, the first difference of logarithmic values allows for the interpretation of relative effect in regression analysis. 
Though the bounds test can be applied to variables that are either stationary[3] or nonstationary, the test is inapplicable when any of the variables are integrated of order two or higher (Pesaran et al., 2001). The first step of our empirical analysis, therefore, is to investigate the stationarity properties of the three variables. A nonstationary variable tends to have time-changing means and variances. Applying standard statistical procedures to nonstationary time series data may generate spurious results driven by trends or random walks. Here, we use several conventional unit root tests, i.e. the Augmented Dickey-Fuller (ADF) test, the Phillips-Perron (PP) test, the Kwiatkowski, Phillips, Schmidt and Shin (KPSS) test, and the Dickey-Fuller Generalized Least Square (DF-GLS) test, to determine the stationarity of the variables.\nConclusions and policy implications: The tourism industry plays an important role in the USA economy at the local, state and national levels. The industry has been the source of revenue generation, employment creation and the development and transfer of human capital, all of which are essential to economic growth. Notwithstanding, very little empirical work has been conducted to explore the nature of the causal relationship between tourism demand and economic growth in the USA. This paper seeks to fill this critical gap in the literature by analyzing the short-run, long-run and overall causality between tourism, economic growth and exchange rates, and how they respond to shocks originating in the system. We use time series analysis procedures, mainly the Pesaran et al. (2001) bounds test, the Granger causality test based on the ARDL framework, and the impulse response and forecast error VDCs to analyze the monthly tourism arrivals to the USA, GDP, and the REER of the US dollar for the period January 1996 to March 2016.\n", "role": "user" }, { "content": "Purpose: The purpose of this study is to investigate the causal linkages between tourism and economic growth in the USA and determine how they respond to shocks in the system.\nDesign/methodology/approach: The study uses a variety of time series procedures, including the bounds test, Granger causality test, impulse response functions and generalized variance decomposition to analyze the relationship between monthly tourist arrivals (TA) to the USA, real gross domestic product (GDP) and real effective exchange rates.\nFindings: Results suggest that GDP Granger causes TA in the USA in the long run, supporting the economy-driven tourism growth hypothesis. Additionally, a shock to GDP generates a positive and significant effect on TA that persists in the long run, while exchange rate shocks only have a significant effect in the first six months.\nResearch limitations/implications: Different tourism sectors may exert different degrees of influence on the economy. The use of aggregate data on TA in the analysis assumes homogeneity in the industry and thus only represents the average relationship between tourism and GDP.\nPractical implications: This study provides insight that shapes the investment, marketing and sustainability decisions of the public and private sectors aimed at increasing tourist flows to drive economic development at the national, state and local levels.\nOriginality/value: Though several studies have examined the factors influencing international tourist demand for the USA, this is the first to investigate the causal relationships between tourism, GDP and exchange rates for the USA. 
It is also the first in the US tourism literature to account for the nature of the interactions among the three variables arising from innovations in the system.\n", "role": "assistant" } ]
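The unit-root screening step described in the entry above (verifying that no variable is integrated of order two before applying the bounds test) can be illustrated with statsmodels, which implements the ADF and KPSS tests; the PP and DF-GLS tests are not sketched here. The series below is synthetic and stands in for the study's log-transformed arrivals, GDP and exchange-rate data.

```python
# Illustrative stationarity screening in the spirit of the entry above:
# confirm each log series is at most I(1) before the ARDL bounds test.
# A synthetic random walk stands in for the study's arrivals/GDP/REER data;
# statsmodels covers ADF and KPSS (PP and DF-GLS live in other packages).
import numpy as np
import pandas as pd
from statsmodels.tsa.stattools import adfuller, kpss

rng = np.random.default_rng(0)
# Random walk with drift, mimicking a nonstationary log-level series
# observed monthly from January 1996 to March 2016 (243 observations).
log_series = pd.Series(np.cumsum(rng.normal(0.002, 0.01, 243)))

def screen(series: pd.Series, name: str) -> None:
    # ADF: null hypothesis is a unit root; KPSS: null is stationarity,
    # so the two tests complement one another.
    adf_stat, adf_p, *_ = adfuller(series, autolag="AIC")
    kpss_stat, kpss_p, *_ = kpss(series, regression="c", nlags="auto")
    print(f"{name}: ADF p={adf_p:.3f}, KPSS p={kpss_p:.3f}")

screen(log_series, "log level")                         # expect nonstationary
screen(log_series.diff().dropna(), "first difference")  # expect stationary
```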
The value of group affiliation: evidence from the 2008 financial crisis
[ "Business group", "Financial crisis", "Corporate governance", "Related party transaction", "Finance" ]
[ { "content": "Summarize the following paper into structured abstract.\n\n1. Introduction: From the chaebols of Korea to the business houses of India to the groups of Latin America, business groups are ubiquitous around the world, and are the dominant form of enterprise throughout most emerging markets. These groups typically consist of legally independent firms operating in multiple industries, which are bound together by persistent formal (e.g. equity) and informal (e.g. family) ties.\n2. Literature: 2.1 Business groups\n3. Data, measurement and statistics: 3.1 Sample\n4. Empirical analysis: 4.1 The valuation of group affiliation\n5. The analysis of entrepreneurial firms: Entrepreneurial firms play an important role in China's economy. For instance, entrepreneurial firms accounted for 60 percent of China's GDP and provided 75 percent of new jobs in 2011. The different ownership structures (SOEs and entrepreneurial firms) may generate different incentives and affect the value of group affiliation during the 2008 crisis.\n6. Conclusion: Business groups are ubiquitous in most economies around the world. Yet it is unclear how group affiliation affects the value of affiliated firms. The 2008 financial crisis furnishes us with an opportunity to observe the performance of group firms in a declining economy. We find that firm value falls with group affiliation as a result of expropriation of minority shareholders by large shareholders during the crisis. The decrease in stock returns of affiliated firms is 1.91 percent greater than that of independent firms from September to December 2008. The value reduction from business group affiliation is more pronounced for firms in industries more seriously affected by the crisis. We also find that corporate governance can make up for the negative effects of group affiliation. Specifically, hiring well-reputed accounting firms, separating the roles of chairman and CEO, and having multiple large shareholders help mitigate the devaluing effect of affiliation during the crisis. Good corporate governance hinders tunneling by controlling shareholders and reduces the negative effects of group affiliation when investment opportunities are substantially limited. We further document that much of the value decline from group affiliation comes from related party transactions between listed companies and controlling shareholders, by which controlling shareholders profit at the expense of minority shareholders, such as loan guarantees, asset acquisitions and equity transfers. Finally, we compare SOEs and entrepreneurial firms and discover that the value reduction from group affiliation is more significant for entrepreneurial firms whose ultimate shareholders actually benefit from tunneling and have high incentives for expropriation.\n", "role": "user" }, { "content": "Purpose: The purpose of this paper is to observe listed firms in China during the 2008 financial crisis and to investigate how group affiliation affects firm value when the economy turns down. The paper focuses the study on answering the following questions: during the crisis, do affiliated firms have higher or lower stock returns than independent firms? Does corporate governance relate to the value of group firms? How does group affiliation influence firm value? 
Does the performance of affiliated entrepreneurial firms differ from that of affiliated state-owned enterprises (SOEs)?\nDesign/methodology/approach: The paper uses non-parametric tests and regression analysis on a sample of 1,469 Chinese listed companies to investigate the research questions.\nFindings: Affiliated firms have lower stock returns than independent firms by 1.91 percent during September to December of 2008. This poor performance is even worse for firms seriously shocked by the crisis. Good corporate governance can mitigate the negative effects of group affiliation on firm value. The lower valuation of affiliated firms lies in the fact that controlling shareholders undertake more related party transactions at the expense of minority shareholders. Finally, although business groups can provide internal financing for entrepreneurial firms in China, affiliated entrepreneurial firms experience a larger value decrease than affiliated SOEs due to the conflict of interest between controlling and minority shareholders.\nOriginality/value: This research provides unique evidence about the performance of group-affiliated firms during the 2008 financial crisis and documents the mechanisms through which group affiliation influences firm value.\n", "role": "assistant" } ]
A proposed model of expatriates in multinational corporations
[ "Globalization", "Multinational companies", "Expatriates", "Cross‐cultural studies", "Taiwan", "United States of America" ]
[ { "content": "Summarize the following paper into structured abstract.\n\nIntroduction: In the last few decades, business has become increasingly international. As international competition continues to intensify, the need for global strategic perspectives increases. One of the greatest challenges is how to manage human resources on a global scale. To be implemented successfully, a global business strategy requires global human resource systems and globally competent people. Human resource personnel need to understand the multi-faceted nature of their employees' identities and they need to maximize the potential of each employee. Hence, an understanding of the identities and traditions of these persons becomes imperative in the workplace of today's global organization. In the current era of rapid globalization, enterprises can pursue business opportunities abroad. Many multinational corporations (MNCs) constantly expand their businesses beyond their own national boundaries and plan increasing globalization. Expatriation represents the most expensive staffing strategy for the multinational corporation, but it remains a viable method for increasing the corporation's understanding of international operations. While organizations may perceive expatriation as an attractive method for accumulating foreign markets, they face the challenges of selection and management of the most appropriate individuals. Thus the factors contributing to the successful expatriation experience are significant to MNCs. Statement of the problem\nLiterature review: Culture\nMethodology: Research design\nResults: Instrument reliability\nSummary, discussion, and conclusion: Summary\nDiscussion: Culture is the acquired knowledge people use to interpret experiences and to guide their behaviors. It is manifest at three levels: (1) behaviors and artifacts; (2) beliefs and values; and (3) underlying assumptions. In the business world, culture can affect technology transfer, managerial ideology and attitudes, and even business-government relations. Perhaps most important, culture affects how people think and behave (Schein, 1968). MNCs must understand national cultures in order to conduct business in the global arena, because national culture affects the way companies negotiate, market, organize, lead, communicate, etc. (Harris and Moran, 1991; Hofstede, 1980). To be successful contenders in that global arena, MNCs must be able to adapt their organizational practices to fit the business practices of the countries in which they operate. MNCs today operate in a world market that demands a more culturally sensitive management (Harris and Moran, 1991). Job satisfaction is a topic of wide interest to both people who work in organizations and people who study them. There are important reasons why we should be concerned with job satisfaction, which can be classified according to whether the focus is on the employee or the organization. First, the humanitarian perspective is that people deserve to be treated fairly and with respect. Job satisfaction is to some extent a reflection of good treatment. It can also be considered an indicator of emotional well-being or psychological health. Second, the utilitarian perspective is that job satisfaction can lead to behavior by employees that affects organizational functioning. Furthermore, job satisfaction can be a reflection of organizational functioning (Spector, 1997). A job satisfaction facet can be concerned with any aspect or part of a job. 
Facets frequently assessed include rewards such as pay or fringe benefits, other people such as coworkers or supervisors, the nature of the work itself, and the organization itself. Job satisfaction is important in everyday life. Organizations have tremendous effects on the people who work in them. It is certainly within everyone's best interest for our organizations to function efficiently and smoothly. Organizations with practices that maximize job satisfaction will likely have employees who are more cooperative and willing to help the organization be successful. Organizations can do much to help in this area by doing things to enhance job satisfaction. Socialization is the systematic means by which firms bring new members into their culture (Berlew and Hall, 1966). The socialization process assists expatriate managers in learning the appropriate national culture of the subsidiary. It encompasses the process of being made a member of a group, \"learning the ropes\", and being taught how one must communicate and interact to get things done (Pascale, 1985; Van Maanen and Schein, 1979). This organization socialization is often identified as the primary process by which people adapt to new jobs and organizational roles. Chao et al. (1994) provided empirical support that the socialization process is an ongoing process for individuals and that different levels and changes in socialization are related to career effectiveness. Socialization is not only an important issue for organizational newcomers, but it is important for established organizational members as well. The need for resocialization among organizational members may be most salient as people experience job changes. These changes may be formal changes such as promotions, transfers, or reassignments, or they may be subtle changes within a formal job title. For people who remain in their formal positions, change in personnel, work procedures, or organizational goals or values may trigger a need for new learning and role adjustment. Thus organizational socialization can be a pervasive aspect of organizational behavior and career development. The goals for a successful cross-cultural assignment are to choose the best person for the situation, optimize their effectiveness, and maximize the performance of the entire organization. Successful cross-cultural assignments, by strengthening the communication channels, also provide a first-hand source of information on the economic and political circumstances. Consequently, in addition to accomplishing the immediate concrete objectives, successful cross-cultural assignments can increase the corporation's international reputation and profits. Conclusion\n", "role": "user" }, { "content": "Purpose: - The purpose of this paper is to examine the relationships of job satisfaction, family support, learning orientation, organizational socialization and cross-cultural training with cross-cultural adjustment in the proposed model.\nDesign/methodology/approach: - A quantitative research method was used, and correlation and regression were employed. The study undertook a multidimensional approach in its assessment of the adjustment of Taiwanese financial institution expatriates.\nFindings: - This study found that job satisfaction played an important role in the proposed model of expatriate adjustment in an international assignment. 
Also found to be of importance was the role of organizational socialization.\nResearch limitations/implications: - The conclusions of this study pertain only to Taiwanese financial institution expatriates in the USA, and cannot be generalized for cross-cultural adjustment in other countries.\nPractical implications: - Given the associations between job satisfaction and cross-cultural adjustment, multinationals should ensure that they have human resource policies and practices to support the job satisfaction of expatriates. Modifying socialization policies and practices can have a positive influence on expatriates' adjustment.\nOriginality/value: - This study both replicates and extends previous research on cross-cultural adjustment. It provides objective information for expatriate selection, management and socialization.\n", "role": "assistant" } ]
An empirical examination of brand loyalty
[ "Brand loyalty", "Consumer goods" ]
[ { "content": "Summarize the following paper into structured abstract.\n\n__NO_TITLE__: An executive summary for managers and executive readers can be found at the end of this article.\nIntroduction: Representing one of the most important factors believed to explain consumer brand choices, it is no surprise that the concept of brand loyalty has aroused enormous interest among academics as well as practitioners within the field of marketing and consumer behaviour. Firms with large groups of loyal customers have been shown to have large market shares, and market share, in turn, has been shown to be associated with higher rates of return on investment (Buzzell et al., 1975; Raj, 1985; Reichheld and Sasser, 1990). Dick and Basu (1994) suggest that brand loyalty favours positive word of mouth and greater resistance among loyal customers to competitive strategies. Obviously such findings encourage marketers to build and maintain brand loyalty among customers. When striving for such goals, information on factors determining the creation of brand loyalty among customers becomes an important matter. This article investigates the importance of the relative attitude to the determination of brand loyalty. Our empirical investigation focuses on the market of frequently purchased consumer goods. In relation to this, we agree with the dictum put forward by Rundle-Thiele and Bennet (2001) that \"variation between the characteristics of each market indicates that the measures used to capture brand loyalty should be very different, as will the antecedent variables\" (p. 28). Furthermore, by examining the relative attitude as an antecedent of repeat purchasing we assume a determinist approach to brand loyalty rather than the more behaviourally oriented stochastic approach (Odin et al., 2001). From a purely stochastic approach, brand loyalty is considered tantamount to repeat purchasing and grounded on no manifest factors determining the behaviour. It is impossible to detect any antecedents of repeat purchases, and therefore companies gain no understanding of how to build brand loyalty. From a determinist approach brand loyalty is conceptualised more like an attitude or intention to purchase and it is believed that the researcher can investigate the factors producing brand loyalty. Marketers investigating these factors may therefore gain valuable insights into the creation and retaining of brand loyalty among customers. Although the stochastic approach seems very useful for explaining consumer purchase behaviour of fast-moving consumer goods (e.g. powder, detergent, toothpaste etc.), we do believe that even for frequently purchased consumer goods the purchase decisions are rarely made on a purely arbitrary basis. Therefore, we lean toward the composite definition of brand loyalty which was originally suggested by Day (1969) and later supported by other researchers (e.g. Jacoby, 1971; Dick and Basu, 1994; Assael, 1998). Jacoby (1971) defines brand loyalty as repeat purchase but clearly points out that this behaviour is a function of psychological processes. In other words, repeat purchase is not just an arbitrary response but the result of some preceding factors (for example psychological, emotional or situational factors). Likewise Dick and Basu (1994) point out that even a relatively important repeat purchase may not reflect true loyalty to a product but may merely result from situational conditions such as brands stocked by the retailer. In their framework, attitude is a requirement for true loyalty to occur. 
Consequently, they define repeat purchasing without a favourable attitude as spurious loyalty. Similar thoughts are found in Assael (1998) who conceptualizes brand loyalty as repeat purchase under high involvement and defines repeat purchase under low involvement as inertia. The relationship between attitude and behaviour is well accepted among consumer researchers although this relationship appears to be most likely when applied to high involvement situations (Ajzen and Fishbein, 1980). Assuming that, even with regard to frequently purchased consumer goods, some consumers will exert more involvement in the purchase decision than others, this article investigates the relationship between relative attitude and repeat purchasing. Understanding the role of relative attitude to brand loyalty is important for brand managers in order to enhance and maintain consumers' repeat purchasing of their brand. Only if an increase in relative attitude results in an increase in repeat purchase is it meaningful for marketers to seek to influence repeat purchasing through attitude-building marketing strategies.\nModel development and research hypotheses: Figure 1 presents our conceptual model of loyalty formation focusing on the relationship between relative attitude and repeat purchasing. We adopt the composite perspective of brand loyalty first suggested by Day (1969) and later supported by other researchers (e.g. Jacoby, 1971; Dick and Basu, 1994). The composite approach to loyalty claims that to be truly loyal the consumer must hold a favourable attitude toward the brand in addition to repeat purchasing it. Accordingly, we suggest that relative attitude positively influences repeat purchasing. However, in our research model we do not expect a direct influence from relative attitude on repeat purchases but rather hypothesise relative attitude to have an indirect influence on repeat purchase through variety-seeking and enhancing resistance to situational factors. More precisely we expect higher relative attitude to have a negative effect on the consumer's tendency towards variety-seeking and a positive effect on the consumer's resistance to situational factors triggering brand switching (e.g. \"out-of-stock\" situations). The concept of relative attitude as well as the hypothesised relationships is further discussed in the following sections. Relative attitude as a two-dimensional construct\nData collection: The data used in this paper were obtained as part of a large survey of brand loyalty. Data were collected in Odense, the third largest city in Denmark, situated on the island of Funen. Random sampling using the Odense telephone directory drew 600 phone numbers. Households were first contacted by phone in order to get their acceptance of participation and to set up an appointment for distribution of questionnaires. Questionnaires were distributed to the respondents by use of the \"drop-off-call-back\" method (e.g. Hair et al., 1998). The following six non-durables were investigated: shampoo, toilet paper, coffee, chocolate, toothpaste and washing powder. In order to improve the response rate it was decided to split the product sample into two identical questionnaires including only three non-durables each. For each product category the respondents were asked various questions concerning their attitudes to and actual purchase of the products. Additionally there were a number of questions concerning household demographics and general purchase patterns. 
348 questionnaires were returned, leaving us with a potential of 1,046 product cases to be analysed. However, for each of the investigated products we decided only to include answers from persons indicating that they had at least a shared influence on the brand decision and that they usually bought this product for the household. Furthermore, cases with \"don't know\" responses to any items to be included in the measurement model were excluded. The application of these procedures resulted in a total of 395 cases, distributed across product types in a satisfactory way (all products were represented by 46 to 78 cases).\nMeasurements: Repeat purchasing is measured by a single item. In this study we asked respondents, for each product category, to indicate how many times out of the last five purchases they bought the same brand. Although we recognize that this is a measure of past behaviour, we believe that such a measure is a reasonable indicator for future repeat purchasing. An alternative way to measure behavioural loyalty could be the perceived probability of purchasing the same brand in the future (Jacoby and Chestnut, 1978). Considering the weak relationship between attitude and behaviour often found in studies of low involvement situations, such an indicator would not be a very reliable measure of future repeat purchasing either, particularly not with regard to frequently purchased goods markets. Except for repeat purchasing, all constructs in our conceptual model are latent unobservable variables. In order to obtain reasonably accurate measures of such constructs it is recommended to develop multi-item scales including two or more items. In our study we created scales with two or three items, mostly adapted from previous research. Each construct and its corresponding items is displayed in the Appendix. All items were measured on a seven-point Likert-like scale ranging from 1 = \"completely disagree\" to 7 = \"completely agree\". Dick and Basu (1994) conceptualize relative attitude as a composite of attitudinal strength and attitudinal differentiation. In their GRID-model the two dimensions are treated as independent constructs. Dick and Basu (1994) do not question the assumption of independence, nor do they suggest how to measure the two constructs. In this study we suggest attitudinal strength and attitudinal differentiation to be represented by purchase involvement and perceived brand differentiation respectively and propose relative attitude to be a latent variable (or function) composed of these two subdimensions. To accomplish this measurement we will specify a second-order factor model, which suggests that the second-order factor \"relative attitude\" is a composite (or function) of the first-order factors \"purchase involvement\" and \"perceived brand differentiation\". Purchase involvement was assessed by Ratchford's (1987) three-item involvement scale. The three items are shown as INV1 to INV3 in the Appendix. Considering perceived brand differences as the opposite of the concept of brand parity we adapted three items from Muncy's (1996) five-item scale of perceived brand parity. The three items are named PBD1 to PBD3 in the Appendix. In order to measure consumers' resistance to situational factors we use two items from Muncy (1996). The two items are named RSF1 and RSF2 in the Appendix. Variety-seeking is measured with two items framed with reference to Van Trijp et al. 
(1996), which are referred to as VS1 and VS2 in the Appendix.\nResults: The model in Figure 1 was translated into a LISREL model consisting of a measurement part (confirmatory factor analysis) and a structural equation part (simultaneous linear regression). The relationship between variables was estimated by maximum likelihood estimation. A two-stage approach (see Anderson and Gerbing, 1988; Gerbing and Anderson, 1992) tested the proposed model. First, the measurement model was developed by conducting confirmatory factor analysis on the applied multi-item scales. Next, the measurement model and the structural equation paths were estimated simultaneously to test the proposed model (overall model). By applying this two-stage method we want to ensure that the measures of the constructs are reliable and valid before attempting to draw conclusions about relations between constructs. Measurement model\nSummary and implications: The results of our study confirm to a large extent the proposed hypotheses and suggest a number of important managerial implications for consumer researchers and brand managers. First, to the best of our knowledge, this is the first study that attempts to measure relative attitude as a second-order factor composite of perceived brand differences and purchase involvement. Considering the high reliability (alpha = 0.92) and predictive validity (see the acceptance of H1 and H2) found in our conceptualisation and measurement of relative attitude, the results lend some support to Dick and Basu's (1994) two-dimensional conceptualisation of relative attitude. Second, as hypothesised, relative attitude has a significant influence on variety-seeking as well as resistance to situational factors, providing support for some of the propositions forwarded by Dick and Basu (1994). Third, although resistance to situational factors turns out to be the most important factor in predicting repeat purchasing, separating intrinsically motivated variety-seeking from extrinsically imposed brand switching reveals that consumers' variety-seeking tendencies may play a mediating role inhibiting resistance to situational factors. The results have implications for brand managers, especially with respect to marketing frequently purchased consumer goods. The results show that consumers with a high relative attitude are less prone to variety-seeking, more resistant to out-of-stock situations and competitors' offers, and consequently more likely to remain loyal to their usual brand. Thus, it is still possible for marketers to create loyal consumers by building positive attitudes towards their products. Our conceptualisation and measurement of relative attitude as a composite of purchase involvement and perceived brand differences suggest two ways for marketers to enhance relative attitudes towards their brand. First, marketers should try to increase consumers' purchase involvement, for example by trying to relate the consumption situation to consumers' value system. Second, marketers should clearly differentiate their own brand from competing alternatives by telling the consumer why and how their brand is better than alternative brands. Within certain product categories (e.g. food products) consumers are more likely to seek variation in their purchases. Since consumers' tendencies to variety-seeking make them more alert to offers from competing brands, it is important for marketers competing in such markets to expand their product lines (e.g. 
tea products with many different flavours), making it possible for the consumer to vary their purchase experiences without having to switch brands.\nLimitations and future research directions: Our study focuses on the frequently purchased consumer goods market, and the results may therefore not necessarily be generalised to other market types. Other studies should test our model on other market types. Also, our results are limited by the nature of measuring repeat purchases. In this study we asked respondents for each product category to indicate how many times out of the last five purchases they bought the same brand. Although we recognize that this is a measure of past behaviour, we believe that such a measure is a reasonable indicator of future repeat purchasing. An alternative operationalisation of behavioural loyalty could be the perceived probability of purchasing the same brand in the future (Jacoby and Chestnut, 1978). Considering the weak relationship between attitude and behaviour often found in studies of low involvement situations, such an indicator would not be a very reliable measure of future repeat purchasing either, particularly not with regard to frequently purchased goods markets. Further research may wish to investigate brand loyalty through longitudinal studies.\n", "role": "user" }, { "content": "Purpose: - This article aims to measure relative attitude as a latent two-dimensional second-order factor and to investigate the relationship between relative attitude and repeat purchasing.\nDesign/methodology/approach: - A conceptual model of attitude-behaviour consistency and brand loyalty is proposed and empirically tested in the context of frequently purchased consumer goods. Structural equation modelling was used on survey data from 395 households to test the model and corresponding hypotheses.\nFindings: - The results support the conceptualization of relative attitude as a composite of purchase involvement and perceived brand differences and also support the hypotheses proposed in our research model. Theoretical and managerial implications are discussed at the end of the article.\nOriginality/value: - Survey data from 395 households was used to test the model and corresponding hypotheses.\n", "role": "assistant" } ]
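The two-stage estimation summarized in the entry above (a confirmatory measurement model first, then simultaneous estimation of the measurement and structural parts) can be approximated in Python with the semopy package, which accepts lavaan-style model syntax; the original analysis used LISREL. The sketch below is schematic: the item names follow the Appendix labels quoted in the entry, but the data are simulated and the exact model specification is an assumption, not a reproduction of the paper's model.

```python
# Schematic two-stage SEM in the spirit of the entry above, written with
# semopy's lavaan-style syntax (the original study used LISREL). The data
# are simulated from four latent factors purely so the sketch runs end to
# end; item names follow the Appendix labels quoted in the entry.
import numpy as np
import pandas as pd
import semopy

rng = np.random.default_rng(1)
n = 395
latent = rng.normal(size=(n, 4))  # latent scores: INV, PBD, RSF, VS

def make_items(factor: np.ndarray, k: int) -> np.ndarray:
    # Each item loads ~0.8 on its factor plus unique error variance.
    return factor[:, None] * 0.8 + rng.normal(0, 0.6, size=(n, k))

cols = ["INV1", "INV2", "INV3", "PBD1", "PBD2", "PBD3",
        "RSF1", "RSF2", "VS1", "VS2", "REPEAT"]
data = pd.DataFrame(
    np.hstack([
        make_items(latent[:, 0], 3), make_items(latent[:, 1], 3),
        make_items(latent[:, 2], 2), make_items(latent[:, 3], 2),
        (latent[:, 2] + rng.normal(0, 1, n))[:, None],  # crude behaviour proxy
    ]),
    columns=cols,
)

# Stage 1: measurement model only (confirmatory factor analysis).
measurement = """
INV =~ INV1 + INV2 + INV3
PBD =~ PBD1 + PBD2 + PBD3
RSF =~ RSF1 + RSF2
VS  =~ VS1 + VS2
"""
cfa = semopy.Model(measurement)
cfa.fit(data)
print(semopy.calc_stats(cfa))  # fit indices such as CFI and RMSEA

# Stage 2: add the second-order relative-attitude factor and the
# structural paths, estimated simultaneously with the measurement part.
full = measurement + """
RELATT =~ INV + PBD
VS ~ RELATT
RSF ~ RELATT + VS
REPEAT ~ RSF + VS
"""
sem = semopy.Model(full)
sem.fit(data)
print(sem.inspect())  # parameter estimates for all paths and loadings
```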
Educational restoration: a foundational model inspired by ecological restoration
[ "Educational leadership", "School improvement", "Educational policy", "School reform", "Green schools", "Whole school sustainability" ]
[ { "content": "Summarize the following paper into structured abstract.\n\n__NO_TITLE__: The industrial model is causing harm across our planet. The harm is not just environmental; it is also social and economic. Global warming, and the consequential changes occurring in our localized climates, is just one example with far-reaching implications for individuals and communities. Other resources detail the complex array of challenges facing humanity due in large part to overconsumption (Leonard, 2010) and overshooting basic planetary boundaries (Steffen et al., 2015; Wijkman and Rockstrom, 2012; Rockstrom, 2009). These challenges are glocal (Brooks and Normore, 2010), with causes and solutions grounded in local to global scales. Broadly, the movement to address these interdisciplinary challenges is known as the sustainability movement, likely the largest social movement in human history (Hawken, 2007). Scholars across diverse fields are contributing to the emerging field of sustainability science, with over 12,000 published articles per year, up dramatically from the few hundred in the early 1990s (Kajikawa et al., 2014). Since the United Nations highlighted the importance of education to the sustainability movement, with its declaration of 2005-2014 as the Decade of Education for Sustainable Development, higher education (Stephens and Graham, 2010; Karatzoglou, 2013; Hoover and Harder, 2015; Figueiro and Raufflet, 2015) and PK-12 curriculum studies (McKeown and Nolet, 2013; Nolet, 2009, 2015) have made substantial strides toward engaging with the sustainability movement. The field of educational leadership has lagged behind these other efforts.\nPurpose: The purpose of this paper is to articulate, and advocate for, a deep shift in how we conceptualize and enact school leadership and reform. Much of the school reform literature, policy and practice, either explicitly or implicitly, aims to improve the factory model of education. Rarely has the school reform literature challenged the most fundamental assumptions upon which modern education sits. Factory-like facilities, rigid schedules, bell notifications, defined grades, curriculum standardization and ability grouping remain pervasive across schools today, even as educators acknowledge the need for a fundamental re-design of the system in order to provide students with twenty-first century learning experiences. In this paper, we propose dismantling the factory model and grounding ourselves in our basic human nature as living systems (Capra and Luigi, 2014; Capra, 1996, 2002). This re-design is more appropriately called educational restoration rather than educational reform, drawing on the field of ecological restoration for inspiration. Ecological restoration, a field four decades old, has garnered powerful lessons applicable to education, a highly over-engineered natural system. As Hawken (2007) explained in his book, Blessed Unrest: Ecological restoration is extraordinarily simple: You remove whatever prevents the system from healing itself. Social restoration is no different. We have the heart, knowledge, money, and sense to optimize our social and ecological fabric. It is time for all that is harmful to leave. (p. 189)\nA case for educational restoration: During the early nineteenth century, Americans called their place for schooling a house, a one-room schoolhouse where children learned what books could teach. 
This early schooling represented \"a small and, to many, an incidental part of the total education the community provided\" (Tyack, 1974, pp. 14-15). Schooling was interwoven within a myriad of learning experiences that took place on farms, in craftsmen's shops, at corner stores and through church and other civic organizations where children learned in and from the entire community (Tyack, 1974). Adequate for rural, agricultural communities, this locally embedded form of schooling failed to meet the needs of the complex urban, industrial society that emerged in the latter half of the century. A modern multi-classroom building and the system it housed no longer stood as a part of the community, but rather as a self-sufficient enterprise, equipped to lead students beyond pre-industrial society. Here students learned the necessary values, attitudes and habits of the modern industrialized world.\nEducational restoration: Shifting from the factory model of education to a living systems model of whole school sustainability (applying sustainability aims to all aspects of school design, management and leadership) requires transformational strategies more associated with nature and life than machines. This deep change cannot be mandated. Rather, the restraints of industrialization need to be removed so that natural learning processes might emerge and thrive. Ecological restoration, a relatively new subfield within ecology and less than 40 years old (Jackson et al., 1995; Perring et al., 2015), provides hopeful guidance for this challenging endeavor. As Clewell and Aronson (2013) explained, ecological restoration is \"an intentional activity that reinitiates ecological processes that were interrupted when an ecosystem was impaired\" (p. 3). The aim of ecological restoration is not necessarily to return a living system to a past, static state, but rather to remove barriers to healthy functioning (Jackson and Hobbs, 2009). It is worth noting that in this young field not all agree on the primary aims of ecological restoration. In addition to the \"recovery\" model that will be our focus in this paper, there are \"legacy\" models that aim to restore systems to a defined past state and \"utility\" models that focus on restoring specific ecosystem services (Woodworth, 2013, p. xviii).\nRestoring learning ecologies as living systems: evidence of promise: Although the actual number of green schools is still a very small percentage of schools overall, early exemplars are demonstrating that green is the way of the future (see many examples of current practice in The Green Schools Catalyst Quarterly, an online journal: https://greenschoolsnationalnetwork.org/gscq/). Green schools provide a high-leverage strategy for addressing many twenty-first century challenges, from student engagement and performance to climate change and community resilience. A recent study of green school leaders found that 99 percent of respondents reported improvements in student engagement and 77 percent reported improvements in community engagement following their greening efforts (Sterrett et al., 2014). These positive learning results, as well as dollar savings and environmental benefits, are increasingly attracting attention (Kensler and Uline, 2017).\nDiscussion: Efforts to restore our ecologies of learning, in ways that better serve human and planetary well-being as well as students' love of learning, are happening across the USA and around the world. 
Because schools are living systems, these restoration activities are not occurring in a standardized, formulaic way. They are happening organically, in a diversity of degrees. The work is particularly visible in the green schools movement, with the practice of whole school sustainability. Whole school sustainability, as a compilation, is a practical pathway out of our industrialized model of education. Educational restoration presents an explicit framework for re-conceptualizing leadership, policy and programming for school improvement. This framework presents a structure for restoring the natural capacity of educational practice and research to better cultivate learning, while also serving glocal socio-ecological needs.\n", "role": "user" }, { "content": "Purpose: The purpose of this paper is to articulate, and advocate for, a deep shift in how the authors conceptualize and enact school leadership and reform. The authors challenge fundamental conceptions regarding educational systems and call for a dramatic shift from the factory model to a living systems model of schooling. The authors' call is not a metaphorical one. The authors propose embracing assumptions grounded in our basic human nature as living systems. Green school leaders, practicing whole school sustainability, provide emerging examples of educational restoration.\nDesign/methodology/approach: School reform models have implicitly and even explicitly embraced industrialized assumptions about students and learning. Shifting from the factory model of education to a living systems model of whole school sustainability requires transformational strategies more associated with nature and life than machines. Ecological restoration provides the basis for the model of educational restoration.\nFindings: Educational restoration, as proposed here, makes nature a central player in the conversations about ecologies of learning, both to improve the quality of learning for students and to better align educational practice with social, economic and environmental needs of the time. Educational leaders at all levels of the educational system have critical roles to play in deconstructing factory model schooling and reform. The proposed framework for educational restoration raises new questions and makes these opportunities visible. Discussion of this framework begins with ecological circumstances and then addresses values, commitment and judgments.\nPractical implications: Educational restoration will affect every aspect of teaching, learning and leading. It will demand new approaches to leadership preparation. This new landscape of educational practice is wide open for innovative approaches to research, preparation and practice across the field of educational leadership.\nOriginality/value: The model of educational restoration provides a conceptual foundation for future research and leadership practice.\n", "role": "assistant" } ]
From creative process engagement to performance: bidirectional support
[ "In-role performance", "Creative process engagement", "Giving support", "Receiving support" ]
[ { "content": "Summarize the following paper into structured abstract.\n\nIntroduction: As the competition between companies has become increasingly heated, how to solve problems creatively in the workplace is more important than ever before (Treffinger et al., 2008). Employees have the natural tendency to engage in some creative activities or different creative processes while carrying out their tasks (Eschleman et al., 2014; Zhang and Bartol, 2010a). The engagement in creative processes, namely, creative process engagement, is expected to be increasingly critical as a component for individual success (Gilson and Shalley, 2004). As a high level of job performance is the most crucial sign of success in work (Ng et al., 2005), how creative process engagement can help employees to achieve better performance is a question that deserves attention.\nLiterature review and hypotheses development: Creative process engagement and in-role performance\nMethod: Sample and procedure\nResults: We conducted a confirmatory factor analysis to examine the distinctiveness of our scales for all the variables used. The proposed measurement model fitted the data well (χ2 = 128.899, df = 215, CFI = 0.941, IFI = 0.941, RMSEA = 0.067). Additionally, we assumed that employees themselves knew their own work behaviors well, including their behaviors in creative activities, their interactions with supervisors, and their in-role performance. Considering this, we collected our data from self-reports of respondents. Therefore, there may be a problem of common method variance, which we tested using Harman's single-factor method (Podsakoff et al., 2003). We conducted a factor analysis on all the variables. Results revealed that method variance accounted for 24.45 percent of the variance in the measures, which indicated that the influence of common method variance was not substantial.\nDiscussion: In line with our first hypothesis, we found that creative process engagement had a positive effect on in-role performance, which suggested that engaging in creative activities was a good way to improve in-role performance for employees. Then, we found that the moderating effect of receiving support on the relationship between creative process engagement and in-role performance was not significant. One reason for this result could be that as employees become more concerned with realizing their self-worth, the outcome of an enhanced in-role performance is not the only way for them to prove their value. Another important way is reflected in the process they develop for improving their in-role performance, such as engaging in creative activities and in exchanges with their supervisors. As engaging in creative activities is a kind of extra-role behavior (Kiazad et al., 2014), when they spend more time and effort on these activities, employees may feel more valuable in their organizations. In addition, because the three stages of creative process engagement take more time and effort (Shalley and Gilson, 2004), solving problems creatively is evidence of one's capability. Individuals may think that receiving supervisor support is a sign of incompetence in the view of their supervisors or other employees. As seen in other cases of dependency, being helped can be associated with decreased self-esteem and depressed feelings on the part of the recipients (Nadler, 1987; Nadler and Fisher, 1976). 
In this way, receiving support from supervisors may weaken the positive effect of creative process engagement on in-role performance.\n", "role": "user" }, { "content": "Purpose: The purpose of this paper is to explore the effect of creative process engagement on employees' in-role performance, and does so by considering the support that employees receive from and give to their supervisors.\nDesign/methodology/approach: Using data from 540 questionnaires collected in China, this paper conducts a hierarchical regression analysis to test the proposed model.\nFindings: Creative process engagement positively affects employees' in-role performance. However, the moderating effect of receiving support on the above relationship is not significant. Instead, it is the interaction of receiving support from and giving it to supervisors that moderates the relationship between creative process engagement and in-role performance.\nResearch limitations/implications: The study makes some contributions to conservation of resources (COR) theory. The authors find that acquiring new resources such as receiving support from supervisors is not always effective. The acquisition process of resources should be considered together with the investment process of resources. According to the COR theory, people invest resources to gain resources and protect themselves from losing resources or to recover from resource loss (Halbesleben et al., 2014). The findings of the study show that employees do not invest resources solely to gain resources. Sometimes, they invest resources such as giving support to supervisors to maintain a relatively balanced relationship.\nPractical implications: Companies can encourage employees to pay more attention to creative process engagement to improve in-role performance. In addition, when offering support to employees, managers should consider whether the employees are able to give it back in response to the received support, and distribute their support to employees accordingly.\nOriginality/value: This paper explored employees' engagement in the creative process in a novel way and clarified the relative effect of creative process engagement on in-role performance. Also, this paper was the first to pay attention to the bidirectional nature of supervisor support.\n", "role": "assistant" } ]
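The Harman single-factor check reported in the entry above is commonly operationalized by loading all survey items into a single unrotated extraction and inspecting the share of variance explained by the first factor or component. The sketch below shows that operationalization on simulated data; the item count, matrix shape and the 50 percent rule of thumb are assumptions, not details from the paper.

```python
# A common operationalization of Harman's single-factor test: extract one
# unrotated solution over all items and check the variance share of the
# first component. X is simulated; in practice it would hold the survey
# item responses (rows = respondents, columns = items).
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(7)
X = rng.normal(size=(540, 20))  # assumed shape: 540 respondents x 20 items

X_std = StandardScaler().fit_transform(X)
pca = PCA().fit(X_std)

first_share = pca.explained_variance_ratio_[0]
print(f"First unrotated component explains {first_share:.2%} of variance")
# Assumed rule of thumb: common method bias is a serious concern when one
# factor accounts for the majority (over 50 percent) of the variance.
```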
Maintaining digital collections with declining resources, fewer staff
[ "Volunteers", "Accessibility", "Digital collections", "Communication", "Digitization workflows", "Staffing" ]
[ { "content": "Summarize the following paper into structured abstract.\n\nIntroduction/background: The Maureen and Mike Mansfield Library (Mansfield Library) at the University of Montana established a Digital Production Unit and formally began creating collections of digitized content in 2010. The Unit was led by a Digital Projects Librarian and employed one full-time staff member as well as a handful of students. With a sole focus on digital collections, the Librarian, staff member and students digitized materials, created metadata and uploaded items to the Library's online digital collections repositories. In 2013, several months after the Digital Projects Librarian had moved on to another institution, a new librarian was hired to lead the Digital Production Unit with the new title Digital Initiatives Librarian.\nModel workflow and staffing: The Digital Initiatives Librarian continued to oversee the established workflow for digital collections, which typically began with a project proposal. The Library's Archives and Special Collections (ASC) Unit as well as entities outside the University proposed several digital collections over the years. The Digital Projects Committee, chaired by the Digital Initiatives Librarian and populated by faculty and staff representing all units within the Library, reviewed and evaluated proposals. The Committee's primary criteria for approval or rejection of a proposal included project scope, project size, preservation and access requirements, consideration of copyright and required resources.\nDisruptions to the model: An ongoing decline in the University and Library budgets since 2013 has resulted in fewer resources for digital collections. In addition, and partly as a result of this decline, a number of Library staff members have decided to seek employment elsewhere. In summer 2015, the full-time staff member in the Digital Production Unit quit. A few months later, before this position was re-filled, the University president announced layoffs across campus. The Library Dean elected to forfeit vacant positions, including the one from the Digital Production Unit, rather than discharge working library employees. In fall 2015, two staff members voluntarily left their positions in BMS. In fall 2016, another BMS employee retired. Only one of these BMS positions was subsequently refilled, leaving only two of the remaining three BMS staff members to assist, as time permitted, with metadata for digital collections. Figure 2 illustrates the decline in staff over time. The loss of staff members and positions in the Digital Production and BMS Units severely impacted the model digital collections workflow. Digitization and file processing now depended solely on the fluctuating, part-time work of five or six student workers, and the remaining BMS staff members had a lot less time to devote to digital collections metadata.\nCarrying on: The Digital Production and BMS Units were expected initially to continue to create digital collections even while time and resources for these collections dwindled. While students continued to capture and process digitized images in the Digital Production Unit, the Digital Initiatives Librarian had to work collaboratively and creatively with the Head of BMS to find workflow efficiencies, particularly for metadata creation, and to obtain the personnel time needed to address the disruptions to the model workflow. 
Furthermore, these Librarians worked together to send consistent messages to the rest of the Library about the consequences of declining resources, including fewer staff members and positions, for digital collections. This messaging ultimately laid the groundwork for both the Digital Production Unit and BMS to scale back, but not halt, digital collection activities. The following examples describe their efforts and related outcomes.\nModified workflows and volunteers: Benefits and challenges: While it is usually still possible to do something, and there are benefits to doing so, there are also several challenges to finding alternatives to the model workflow and the standard staffing solutions. There is a body of research wherein authors specifically explicate the benefits and challenges of volunteers in academic libraries (Smallwood and Sanborn, 2016; Throgmorton, 2016; Barlett, 2013; Crumpton, 2013; Forrest, 2012; Hoy, 2011; Schobernd et al., 2009). Here is a brief summary of many of the points that are described in the literature and that also held true at the Mansfield Library.\nConclusions: A series of circumstances over the past several years contributed to an ever-shifting landscape wherein the model digital collections workflow became less feasible over time. Rather than halt operations in the Digital Production Unit altogether - which was an (unfavorable) option - the Digital Initiatives Librarian worked with the Head of BMS to find alternative solutions for making progress on in-process digital collections, for enhancing and maintaining existing ones, and for realistically managing a digital collections program given resource and personnel-related constraints.\n", "role": "user" }, { "content": "Purpose: The purpose of this paper is to describe methods for restructuring workflows and efficiently using staff members and volunteers to continue work on multiple, simultaneous digital collections as budgets and resources decline.\nDesign/methodology/approach: This paper describes one library's varied approaches to several digital collections supported by literature or volunteers in libraries.\nFindings: In the face of continually declining resources and new, time-sensitive priorities and compliance responsibilities, librarians can continue to maintain digital collections by modifying workflows, using the services of volunteers and communicating strategically.\nPractical implications: This paper is relevant to librarians, archivists and others who are looking for ways to justify and capitalize on the use of unconventional personnel in digital collections programs.\nOriginality/value: This paper presents a case of the successful use of volunteers to accomplish digital collections-related tasks in an academic library and provides a communication-based strategy for addressing some of the challenges related to volunteers in academic libraries.\n", "role": "assistant" } ]
Corporate heritage brands: Mead's theory of the past
[ "Brands", "Identity", "Heritage", "History", "Authenticity", "Nostalgia" ]
[ { "content": "Summarize the following paper into structured abstract.\n\n1. Introduction: Brand heritage has been a leitmotif within the marketing canon for at least three decades (Smith and Steadman, 1981), while the notion of corporate brands has evolved over the past twenty years (Balmer, 1995, 1998). However, it is only recently that the more specific idea of corporate heritage brands has entered the corporate marketing management lexicon.Interest in the concept of corporate heritage brands dates back to the work of a triumvirate of scholars, who first identified the notion and articulated some of the key characteristics of the corporate heritage brand construct via their scrutiny of monarchies as corporate brands (Balmer et al., 2006). A year later, a second paper provided a more substantive scrutiny of the corporate brand heritage phenomenon (Urde et al., 2007). Since then, scholarly interest in the territory has begun to burgeon (Blomback and Brunninge, 2009; Hudson, 2011; Wiedmann et al., 2011).While we recognise that extant scholarship has greatly advanced our understanding of customer preference and management action in this area, nonetheless many tangential questions remain unanswered. Perhaps most importantly, the fundamental mechanisms of consumer behaviour relating to heritage brands remain elusive.We hope to contribute to this topic by adapting a relatively obscure theory about the role of the past in human behaviour, which was first developed by the pioneering sociologist George Herbert Mead (1929, 1932). Drawing on his ideas, we conceptualise why and how corporate heritage brands, and more especially their historical dimensions, appeal to consumers. We also offer a framework, based on a four-part taxonomy of heritage effects, which can assist practitioners in developing more effective corporate communications for older companies.Although the primary focus of this article is corporate heritage brands per se, there are many corporate-level constructs relating to the past (Balmer, 2011a). We believe that Mead's theory is also of considerable utility in understanding other historically-related concepts, such as authenticity and nostalgia.In communications terms, researchers have observed that the corporate communications and corporate design strategies of long-established companies habitually - through one or more of their outward-directed formal corporate communications channels - make reference to organisational provenance or history. In certain instances, the aforementioned are emblematic of a category of institution whose brands can be characterised as corporate heritage brands. These are brands which have bona fide heritage characteristics and whose strategic positioning is informed by corporate brand heritage or corporate identity heritage (Balmer, 2009, 2011a, b; Balmer et al., 2006; Hudson, 2011; Urde et al., 2007).\n2. Mead's theory of the past: George Herbert Mead (1863-1931) was a pioneer of the sociology discipline. Although his scholarship is almost a century old, much of it remains valid and useful for contemporary scholars (Stryker, 2008). A search of recent academic literature yielded more than 1,000 articles that cite Mead, including more than 800 in sociology and more than 175 in business journals.Mead is best known for his contributions to the theory of symbolic interactionism. This suggests that an individual develops a sense of self through reflection about and comparison to external people, entities or objects (Mead, 1934). 
In the marketing and consumer behaviour literature, Mead has been cited repeatedly on the subject of symbolic interactionism (Goulding, 1999; Leigh and Gabel, 1992; Reed, 2002; Schouten, 1991; Solomon, 1983) and related topics such as identity (Belk, 1989; Christensen and Askegaard, 2001; Mittal, 2006).While Mead is less well known for his theory of the past, it has been suggested that his thoughts about the historical process inform his other work (Adam, 1990; Flaherty and Fine, 2001). For Mead, time was a fundamental component in his larger effort to understand identity. He argued that events and people in the past, including prior versions of oneself, serve as points of comparison in the process of symbolic interaction. As subsequent scholars have observed, \"Mead shows us that the self is above all a temporal process\" (Flaherty and Fine, 2001). This has influenced some to conclude that we should \"reconceptualise human agency as a temporally embedded process of social engagement, informed by the past\" (Emirbayer and Mische, 1998).The Mead theory of the past has received some attention in the discipline of sociology (Adam, 1990; Bergmann, 1992; Emirbayer and Mische, 1998; Flaherty, 1987; Flaherty and Fine, 2001; Luckmann, 1983; Urry, 1996). Of particular interest is an article by sociologists Maines et al. (1983) that proposes a four-part model to amplify the Mead theory of the past, as it applies broadly to human behaviour. Their model serves as the foundation for our own typology, which is applicable more specifically to the field of consumer behaviour and the concept of heritage brands.To date, Mead's theory of the past has received little attention in marketing and consumer behaviour literature. Notable exceptions include a discussion of the role of possessions in personal identity (Belk, 1990) and a literature review related to temporality in financial services (Gibbs, 1998). As such, we believe that the framework below offers an important new perspective on the dynamics of heritage brands.\n3. Applying Mead to corporate heritage brands: In this section, the sociological perspectives of the past articulated by Mead (1929, 1932) and Maines et al. (1983) are applied to the corporate heritage brand construct. Using this prior scholarship as a theoretical foundation, we have developed a conceptual model with four dimensions of corporate heritage brands. These will be discussed in turn by means of proffering an explanation, verification and illustration for each.3.1 Structural heritage\n4. Corporate heritage brands: a unifying framework: The four dimensions detailed above - structural heritage, implied heritage, reconstructed heritage, and mythical heritage - constitute a unifying conceptual framework for corporate heritage brands. This is shown in Figure 1, which includes a summary of descriptive attributes and provides a mnemonic sequence to assist recollection.The grouping of different types of heritage into an integrated typology suggests the need to explore the similarities and differences between these types. In this regard, several points deserve clarification.4.1 Heritage conveys permanence and can infer strength and stability\n5. Broadening the discussion: authenticity and nostalgia: The brand heritage phenomenon may usefully be considered relative to other historically-informed concepts in marketing and consumer behaviour, namely authenticity and nostalgia. 
These constructs have their own attendant literatures, but it can be argued that the Mead theory of the past offers a unifying structure that can also encompass these ideas.5.1 Authenticity\n6. Discussion: Our framework, along with the characterisation of corporate heritage brands detailed in this article, aims to clarify the internal logic of the brand heritage concept, and aims to build on the extant literature to provide a solid foundation for further scholarly research regarding the dynamics of corporate heritage brands. The framework also represents a tool that executives and consultants may use to recognise and classify different types of heritage phenomena.Our recent interactions with practitioners indicate that this model offers new insight into the management of brands with a heritage, and is particularly useful in formulating decisions about different communications approaches. Our typology clarifies the complex conceptual and vernacular landscape upon which discussions about older brands occur. Executives may use it to make explicit connections between the selection of specific signals that are intended to trigger associations on the one hand, and the cognitive and behavioural effects that they are hoping to induce on the other.For example, the model suggests that references to company founders will produce consumer associations with the firm and its products, and should be used to support utility claims such as reliability or trustworthiness. Similarly, the model suggests that references to contextual cues from historic eras are more likely to produce reactions that are affective or introspective, and should be used to engage consumers in more intuitive appeals. Differentiating these two approaches can have a meaningful influence on decisions about images or narratives that may be employed in corporate advertising and corporate communications generally.\n7. Conclusion: This article has demonstrated that the theory of the past articulated by George Herbert Mead (1929, 1932) and amplified by Maines et al. (1983) can be adapted to provide a unifying framework for the brand heritage concept. Our proposed typology is comprised of four dimensions denoted as structural heritage, implied heritage, reconstructed heritage and mythical heritage.Structural heritage refers to occurrences in the past that condition the future and provide precedent for claims by brands. Implied heritage refers to the past that is implied by the current status of brands. Reconstructed heritage refers to the remembrance of brands that were encountered earlier in the lifetimes of consumers. Mythical heritage refers to historical brand elements that are fictitious, and which facilitate the escape of consumers into imaginary scenarios. These four dimensions represent ideal types that are frequently interwoven.According to Mead, the interaction of the present and the past is integral to the process of identity formation. In our adaptation for brand heritage, the dimensions of structural heritage and implied heritage together form the innate heritage category, which concerns the identity of the brand itself. The historical characteristics of the brand serve as cues to influence buyer preference. In contrast, the dimensions of reconstructed heritage and mythical heritage together form the projected heritage category, which concerns both the identity of the brand and the identity of the consumer. 
This involves the projection of personal identity onto the brand or its associated products, and appropriation of the brand as an instrument of existential definition.Based on the findings of our article, the following alternative definition of corporate brand heritage effects is hereby proposed:Corporate brand heritage is an approach to corporate marketing that involves reference to the past. It encourages the engagement of consumers with the history of the brand, or the engagement of consumers with history through the brand. The former is operative in defining the identity of the brand alone, while the latter is also operative in defining the identity of the consumer. The corporate brand heritage concept encompasses a wide variety of historical references in contemporary corporate marketing, including interpretations and faux applications, provided that they relate to the brand and support the process of identity definition. A heritage brand is a specific variant of this phenomenon, differentiated by an organisational commitment to heritage and a strategic position that emphasises the past.This characterisation and the related framework clarify the internal logic of the corporate brand heritage concept, and provide additional foundation for further scholarly research regarding the dynamics of corporate heritage brands. The framework also represents a tool that executives and consultants may use to recognise and classify different types of heritage phenomena. As such, it contributes to both theoretical and practical endeavours related to the management of corporate heritage brands.\n", "role": "user" }, { "content": "Purpose: - The purpose of this paper is to explore the mechanisms of consumer behaviour relating to corporate heritage brands. The aim is to clarify the internal logic of the brand heritage concept, and to build on the extant literature to provide a solid foundation for further scholarship regarding corporate heritage brands.\nDesign/methodology/approach: - This conceptual paper adapts a theory about the role of the past in human behaviour, which was first developed by the pioneering sociologist George Herbert Mead, to the realm of consumer behaviour and heritage brands. The authors illustrate several principles in practice and develop a framework based on a taxonomy of heritage effects. The analysis offers clarification about the relationship of this framework to a variety of historically-related topics in prior literature, including authenticity and nostalgia.\nFindings: - This paper explains why and how corporate heritage brands appeal to consumers. The authors suggest that brand heritage encourages the engagement of consumers with the history of the brand, or the engagement of consumers with history through the brand. The former is operative in defining the identity of the brand alone, while the latter is also operative in defining the identity of the consumer.\nPractical implications: - Executives and consultants may use the framework to recognise and classify different types of heritage phenomena, and thereby develop more effective corporate communications for older companies.\nOriginality/value: - This is the first time that the Mead theory has been applied to corporate heritage brands.\n", "role": "assistant" } ]
Co-production strategy, retail competition, and market segmentation
[ "Market segmentation", "Marketing theory" ]
[ { "content": "Summarize the following paper into structured abstract.\n\n__NO_TITLE__: Nomenclature\n1. Introduction: Co-production, which is defined as customers participating with firms in creating a product (Lengnick-Hall, 1996; Auh et al., 2007; Ahmad, 2016; Chaney, 2019), has become a critical business strategy (Bendapudi and Leone, 2003; Vargo and Lusch, 2004; Jung and Yoo, 2019). In general, co-production requires consumer involvement to provide value in a production process for goods or services. For example, a car buyer must drive, maintain and refit the car to obtain the desired benefits. From this perspective, a firm's offerings are intermediate goods that consumers must produce themselves for their own current or delayed consumption (Vargo and Lusch, 2004). With the implementation of a co-production strategy, the firms would not only consider customers as end consumers, while they turn to view customers to be creative participators in the production process. This strategy emphases on the company-consumer interface and suggest that a consumer's location becomes the locus of value creation. Product value is now co-created by customers who are actively participating in the product's forming for end-use, such as IKEA tables that require assembly or toy cars assembled with LEGO bricks. Especially in a distribution channel, the manufacturer not only provides intermediate kits that enable such customer participation, but also need to make decisions for the level of co-production as well as wholesaling price. Afterward, the downstream retailers make a delivery decision accordingly. Theoretically, it is also documented that the co-production process typically accompanies with newly co-created value by the consumers, which is called the \"IKEA Effect\" (Mochon et al., 2012; Norton et al., 2012), or the \"I Made It Myself\" effect (Troye and Supphellen, 2012). Specifically speaking, a consumer normally feels a product more valuable when it evolves their own time and effort and is willing to pay more than the original price of the product. With such new co-created value, the firms in a typical distribution channel are naturally facing the following questions:\n2. Literature review and theoretical background: This study is related to three streams of literature, especially the quantitative study. The first is the literature on the mechanism of co-production. In an empirical study, Mochon et al. (2012) found that feelings of competence associated with self-produced products lead to increased valuation among consumers. An experiment by Troye and Supphellen (2012) demonstrated that branded input products might benefit from an evaluation bias caused by the \"I made it myself\" effect. Lan and Rajneesh (2013) revealed that firms not only provide value to consumers but also realize higher profitability when they correctly identify consumers' co-production motivation and provide proper pricing for service options. Syam and Pazgal (2013) studied optimal cooperative productive efforts and pricing strategies, considering co-creation with production externalities. Atakan et al. (2014) highlighted the importance of encouraging consumers to physically participate in the production process during the realization stage, rather than the design stage. Ranjan and Read (2016) identified the value of co-creation in the production process and identified two primary components of consumers' willingness-to-pay: co-production and value-in-use. 
The former refers to co-creation value, while the latter refers to the value generated from the functional attributes of the product and its use for a specific purpose. Moreau et al. (2011) found that consumers place a higher value on their own time and effort and have a higher willingness to pay when customizing for others than for themselves. Troye and Supphellen (2012) experimentally proved that branded input products benefit from the evaluation bias caused by the "I made it myself" effect. Chaney (2019) qualitatively studied crowd-funding, which is considered a specific case of co-production. Jung and Yoo (2019) empirically examined the moderating role of customer co-production in the effect of customer indebtedness on service friendship. This study complements these empirical works by modeling a closed-form game for the co-production strategy in a typical distribution channel, which includes a manufacturer who can manipulate the strength of co-production, two competing retailers with different service abilities, and consumers who are heterogeneous in their product preferences. Through this study, the lens of co-production research is extended from a single firm to multiple firms, especially for tracking each firm's incentives for co-production from a channel perspective.\n3. Model: We consider a consumer market with a single manufacturer and two types of potential retailers who differ in their service quality levels. The detailed assumptions of the model are as follows.\n4. Analysis: 4.1 Case 1: without co-production (ps=0)\n5. Discussion: More manufacturers are utilizing co-production strategies to induce consumers and improve profitability (Stevens et al., 2017; Xie et al., 2008). The academic literature in marketing or operations management also emphasizes the importance of co-production for consumers and firms, and several empirical studies have examined the phenomenon of increasing consumer valuation in the co-production process (Mochon et al., 2012; Norton et al., 2012). Unlike extant studies on co-production, we build a stylized analytical model to examine the impact of co-production on market dynamics from the perspective of a manufacturer who chooses a co-production strategy, with downstream retail competition as a common market setting. We show that market performance and the utility of all members can be improved, mostly in the presence of retail competition.\n6. Conclusion and research implications: In this study, our results are of value for manufacturers managing distribution channels and customer segmentation tied to co-production strategy. With the increasing interest of academics, co-production has been examined in different ways and from different domains, such as marketing (e.g. Etgar, 2008; Ranjan and Read, 2016; Jung and Yoo, 2019), psychology (e.g. Mochon et al., 2012; Norton et al., 2012; Ahmad, 2016) and operations management (e.g. Silveira, 2011; Karmarkar and Roels, 2015; Nicola et al., 2016). However, in this study, we extended the co-production literature in three aspects. First, we found that the performance of a typical channel with a manufacturer and two competing retailers can be improved as the manufacturer incorporates co-production into its products. As a result, interestingly, a channel with products that need co-production can segment consumers and bring the low-quality retailer back into the market.
Second, we found that even though the manufacturer's co-production strategy always leads to fiercer competition, the consumers' co-created value might eventually benefit all the parties in a distribution channel; thus, manufacturers are motivated to adopt this strategy. Previous literature provided empirical evidence, as mentioned above; however, a specific analytical investigation was missing, and this study fills that niche. Finally, we revealed that when the retailers are not highly differentiated in quality, the manufacturer's co-production strategy can also potentially benefit both retailers. In other words, by implementing a co-production strategy, the competing retailers can capture a part of consumers' surplus from the manufacturer, which is a new insight for channel management.\n", "role": "user" }, { "content": "Purpose: The purpose of this paper is to examine the effect of manufacturers' co-production strategy on market segmentation and channel performance under retail competition.\nDesign/methodology/approach: It differs from previous empirical studies by primarily focusing on the increment in consumer value accompanying co-production. The authors establish a game-theoretical model to analyze the impact of co-production on market segmentation and the profitability of channel members in a competitive retail environment.\nFindings: The results reveal that manufacturers introducing co-production expand market coverage and benefit all channel members when the intensity of competition is sufficiently high, especially for retailers with low quality levels, who are out of the market without co-production. Furthermore, with the increase in customer valuation through co-production, employing a co-production strategy is always a dominant strategy for manufacturers.\nResearch limitations/implications: First, although the authors assume a monopoly manufacturer and two duopoly retailers, adding competition between manufacturers should enrich the model. Multiple products with vertical or horizontal differentiation could also be introduced into the model. Second, the authors use the multiplicative utility function to model the value co-creation effect on consumers; however, different utility functions may yield significantly different results and implications. Third, the authors model a one-shot game in a single product selling period; future studies may employ multi-period games to obtain further insight into co-production strategy. Finally, the model assumes that all consumers are homogenous in the extent of value creation and hassle cost. Future research may find it interesting to consider heterogeneity in these characteristics.\nPractical implications: The business world today has already seen the power of leadership in a supply chain shift from manufacturers to retail giants such as Walmart, Home Depot and Best Buy. The findings also propose a new route to counteract the emergence and rise of dominant retailers.
On the other hand, with the application of new technologies in the retail industry such as 3D avatars, AR/VR and the Internet of Things, consumers are more likely to participate in various forms of co-production activities, so how to execute the co-production strategy has become more and more important for managers.\nSocial implications: The conclusion of this study points out the way to achieve a win-win outcome under which both the channel members, including the manufacturer and retailers, and consumers can be better off; that is, the channel can reach a Pareto improvement, so social welfare is increased accordingly.\nOriginality/value: The authors propose an analytical framework to examine the effects of co-production and competition on market segmentation and profitability, and prove that co-production is a powerful marketing tool that can attract consumers and increase profitability, which manufacturers can incorporate into their products even in a competitive environment.\n", "role": "assistant" } ]
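The record above describes the game-theoretical channel model only in prose. As a rough numerical companion, the sketch below sets up a toy vertical-differentiation duopoly in which co-production scales consumer valuation by (1 + alpha) at a hassle cost h, and approximates equilibrium retail prices by iterated best responses. All parameter values, the specific utility form and the uniform taste distribution are illustrative assumptions, not the paper's actual specification.

```python
# Toy vertical-differentiation duopoly with a co-production value uplift.
# Assumption: a consumer with taste v ~ U[0, 1] who buys from retailer i
# gets utility (1 + alpha) * v * q_i - p_i - h, where alpha is the
# co-created value uplift and h the hassle cost of co-producing.
import numpy as np

Q_H, Q_L = 1.0, 0.6               # assumed retailer service-quality levels
W = 0.2                           # assumed manufacturer wholesale price
GRID = np.linspace(W, 1.5, 261)   # candidate retail prices

def demands(p_h, p_l, alpha, h):
    """Demand shares of the high- and low-quality retailer."""
    v1 = (p_h - p_l) / ((1 + alpha) * (Q_H - Q_L))  # indifferent: H vs L
    v0 = (p_l + h) / ((1 + alpha) * Q_L)            # indifferent: L vs exit
    v1, v0 = np.clip(v1, 0, 1), np.clip(v0, 0, 1)
    return max(1 - v1, 0.0), max(v1 - v0, 0.0)

def best_response(rival_price, is_high, alpha, h):
    """Grid-search the profit-maximizing price against a fixed rival price."""
    def profit(p):
        d_h, d_l = (demands(p, rival_price, alpha, h) if is_high
                    else demands(rival_price, p, alpha, h))
        return (p - W) * (d_h if is_high else d_l)
    return max(GRID, key=profit)

def equilibrium(alpha, h):
    p_h, p_l = 1.0, 0.5
    for _ in range(100):          # iterate best responses to a (near) fixed point
        p_h = best_response(p_l, True, alpha, h)
        p_l = best_response(p_h, False, alpha, h)
    return p_h, p_l, demands(p_h, p_l, alpha, h)

for alpha, h in [(0.0, 0.0), (0.3, 0.05)]:   # without vs with co-production
    p_h, p_l, (d_h, d_l) = equilibrium(alpha, h)
    print(f"alpha={alpha:.2f}: pH={p_h:.2f} pL={p_l:.2f} "
          f"shares H={d_h:.2f} L={d_l:.2f} coverage={d_h + d_l:.2f}")
```

Comparing the run with alpha = 0 against alpha > 0 gives a quick feel for the market-coverage effect the abstract attributes to co-production.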
Okinawa as dystopia: panoptic monitoring systems in schools
[ "Dystopia", "Asymmetric interactions", "Bureaucratic obedience", "Okinawa", "Panoptic monitoring system", "Pressure on schools" ]
[ { "content": "Summarize the following paper into structured abstract.\n\nIntroduction: A dystopia refers to an imagined undesirable futuristic society that is characterised by pessimism and fear (Farnell, 2017; Nordensvard, 2014). Dystopias were first portrayed in novels and films, describing a society that has negatively developed, with special reference to technological and scientific advancements (Barrish, 2016; Claisse and Delvenne, 2015). Nowadays, dystopias are discussed in social sciences - not just as a genre in novels - and are even used as a key concept to analyse recent political movements and actors' perspectives (Murnane, 2018), or issues of poverty as observed in developing countries and the gaps between them and developed countries (Papastephanou, 2016). This is because dystopias can suggest, in general, the end of our contemporary societal system, which leads us to question what would happen after such a crisis (Nordensvard, 2014).\nTheoretical frameworks: Two theories constitute the framework of this study to investigate how the panoptic monitoring system has emerged in Nago. The first is asymmetry theory, which was developed by Womack (2006) to investigate the relationships between \"stronger\" and \"weaker\" states. Although Okinawa is a prefecture of Japan, its relationship with the central government is highly resonant with the ones that Womack (2006) studied, in addition to its history as an independent state. The second framework is called \"the teacher as a colony\" (Saito et al., 2018). This framework refers to teachers as subjects who are imposed with certain sets of policies and categorises the teachers' attitudes towards such policies. At this time, \"the teacher as a colony\" has been expanded to deal with actors in bureaucratic institutions on their agentive responses towards policies.\nMethods: This study employed a collective self-study method (Samaras, 2011; Samaras et al., 2012) in which the authors conducted interactive examinations of issues. The authors decided to employ this method because it aims to investigate the lived experiences of the first author's own practices as a clinical psychologist in the schools of the area. The first author observed the pressures on various stakeholders due to the emergence of panoptic monitoring systems in the schools and their influence on the local community and decided to investigate this issue by using the self-study method. Self-study was developed as a method for researching teacher education (Samaras, 2011), and nowadays, its application includes policymaking (You, 2011) and other fields like medicine (Robbins and Sarkany, 2017). Because of the sensitivity of the situation, the authors decided to examine the first author's lived experiences rather than any other participants in the community.\nCase: This section is divided into four parts: the first comprises explanations on historical and political Okinawan contexts; the second is an explanation on the recent political conflicts in Nago city, where the study was conducted, over the construction of a new base for US Marines; the third discusses issues on local educational administrations. 
The nature of the administration here is depicted as highly obedient towards Tokyo's policies, and even furthering them, despite engaging in confrontational arguments about the construction of the new base; the fourth is an explanation of how these political tensions and administrative pressures would impact the schools.\nDiscussion: The aim of this study was to investigate how a panoptic monitoring system has emerged in Nago schools. Regarding the new port construction issues in Nago, people's remarks and behaviours are monitored within the community and by the authorities, and this situation matches the fundamental suppressive nature of a dystopia (Roberts, 2013). In Nago's case, the governmental selection of Henoko as the site for the new base created a schism in the community, and the introduction of compensations led to the establishment of a communal panoptic monitoring system to pressure each other not to embarrass those who receive the compensations.\n", "role": "user" }, { "content": "Purpose: The purpose of this paper is to investigate a dystopian situation with special reference to how a panoptic monitoring system emerges in schools. To satisfy this aim, there will be a close analysis of the city of Nago in Okinawa Prefecture, where there is a huge debate over the new US Marine base construction and how it greatly influences people's lives.\nDesign/methodology/approach: This study will employ a self-study by the first author, who is a clinical psychologist under the board of education in the city. This self-study aims to examine the lived experiences of the author based on interactions with critical friends.\nFindings: The government's selection of the site for the new base created a schism in the community, and the introduction of compensations led to the establishment of a communal panoptic monitoring system. This communal panoptic monitoring largely influences the relationships between pupils, teachers and parents. Further, another panoptic monitoring system has developed inside the Nago schools due to the intensification of the assessment policies given by the ministry in Tokyo.\nOriginality/value: This investigation purports to analyse a dystopian situation with special reference to how a panoptic monitoring system, a key element of a dystopia, emerges in schools.\n", "role": "assistant" } ]
Disentangling value-enhancing and cost-increasing effects of knowledge management
[ "Knowledge management", "Innovation performance", "Financial performance", "Belgium", "Value analysis", "Cost effectiveness" ]
[ { "content": "Summarize the following paper into structured abstract.\n\n1. Introduction: A sustainable competitive advantage may not only result from tangible assets and resources, but also from knowledge that can be transferred, aggregated and appropriated and that is difficult to imitate (Grant, 1996; Teece et al., 1997; Wernerfelt, 1984; Earl, 2001). It is therefore no surprise that firms have developed several techniques to manage the generation and sharing of knowledge within the firm and that their investments in these techniques have been increasing dramatically (Mills and Smith, 2011). Also the scientific research on knowledge management (KM) has developed greatly in the past decade, resulting in important conceptual frameworks and theoretical models (Zack et al., 2009).Empirical evidence on how KM techniques influence firm performance is however mixed. Some studies demonstrate a positive impact of KM techniques on product innovation performance (e.g. Darroch, 2005; Czarnitkzi and Wastyn, 2009) and organizational performance in general (Zack et al., 2009; Mills and Smith, 2011). It is assumed that by affecting innovation performance and organizational performance in general, KM techniques will have a positive effect on the firm's financial performance. However, with the exception of Marques and Simon (2006) and Zack et al. (2009), extant empirical work (e.g. Darroch, 2005; Kalling, 2003) does not confirm this. Several causes for this lack of empirical clarity exist. Firstly, there have been few large-scale studies on the relationship between KM and organizational performance. The studies that have been published (for an excellent overview see Zack et al., 2009) use relatively small samples. Secondly, with the exception of the work by Zack et al. (2009), they aggregate a broad set of financial and non-financial performance indicators, making it impossible to study the financial performance effects in particular. In addition, the existing studies all rely on self-reported performance measures which differ between studies and can therefore lead to divergent findings. Finally, these studies measure the use of KM techniques and firm performance at the same point in time, making it impossible to establish whether KM techniques drive firm performance or vice versa.In this article, the authors do not intend to question the positive indirect relationship between KM and financial performance. However, they argue that the simultaneous adoption of a wide variety of KM techniques also triggers costs that might negatively influence the financial performance of the firm. For instance, some scholars (e.g. Choi and Lee, 2003; Williamson, 1979) have suggested that techniques aimed at internal knowledge creation and sourcing of external knowledge are costly and time-consuming. The purpose of this study is therefore to:* empirically disentangle the value-enhancing and cost-increasing effects of KM techniques on financial performance;* for a large sample of companies in a variety of industries;* using accounting data as opposed to internally reflective performance measures; and* while controlling for the possibility of reverse causality.The authors rely on a sample of 705 innovative Flemish manufacturing and service firms. The data on the use of KM techniques and innovation performance is collected via the Community Innovation Survey 2007 which covers the years 2004-2006. This is complemented with information on financial performance from the annual accounts. 
The structural equation analysis generates two important findings. The authors find empirical confirmation that the use of KM techniques has an indirect positive impact on financial performance via product innovation performance. However, they also observe a direct cost-increasing effect of the range of implemented KM techniques on financial performance. Moreover, the analyses suggest that, within the observed time frame, the direct cost-increasing effect of KM techniques is larger than the indirect value-generating effect of these techniques, resulting in an aggregated negative effect of KM techniques on financial performance.This article consists of four sections. First the conceptual framework is introduced, hypothesizing on the value-enhancing and cost-increasing effects of the range of implemented KM techniques on financial performance. Subsequently, the methodology is discussed. Next, the results of the analyses are presented. Finally, the authors point to the main theoretical and managerial implications of their findings, they discuss the study's main limitations, and suggest interesting avenues for future research.\n2. Conceptual framework and hypotheses: The purpose of this article is to disentangle the value-enhancing and cost-increasing effects of KM techniques on financial performance. In order to do so, the authors developed a conceptual framework that is presented in Figure 1. This framework provides particular hypotheses on the connections between the range of KM techniques used by the firm, its other innovation efforts, product innovation performance, and financial performance.2.1 The direct impact of knowledge management techniques on product innovation performance\n3. Methodology: 3.1 Sample\n4. Results: 4.1 Descriptive statistics\n5. Discussion and conclusion: 5.1 Implications\n", "role": "user" }, { "content": "Purpose: - The main purpose of this paper is to provide large-scale empirical evidence on the value-enhancing and cost-increasing effects of knowledge management (KM) techniques.\nDesign/methodology/approach: - The authors conduct structural equation analyses, using data from the Community Innovation Survey 2007 and from annual accounts of 705 innovative Belgian firms.\nFindings: - Results confirm that the use of KM techniques has an indirect positive impact on financial performance via increased innovation performance. In addition, a direct cost-increasing effect of KM practices on financial performance is observed. In the short term, this direct cost-increasing effect exceeds the indirect value-generating effect of KM techniques.\nResearch limitations/implications: - This study investigates the short-term effects of KM techniques. Future research should study the long-term costs and benefits. Data were collected in Belgium and may not reflect the impact of KM practices in other geographic, economic or cultural settings.\nPractical implications: - The findings clearly indicate that the implementation of KM techniques entails significant costs. Within a two-year time frame, the financial costs of KM techniques are more visible than their potential benefits. 
An exclusive focus on the short-term implications of the use of KM techniques is hence likely to give an overly pessimistic view of their potential financial contribution.\nOriginality/value: - This article is the first large-scale study that disentangles both the value-enhancing and cost-increasing effects of KM techniques on financial performance and that uses time lags and accounting data (as opposed to self-reported performance measures) to do so.\n", "role": "assistant" } ]
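The finding that a direct cost effect can outweigh an indirect, innovation-mediated benefit can be made concrete with a small mediation-style calculation. The sketch below is a simplified two-equation OLS analogue of the idea, not the authors' structural equation model; the simulated coefficients are assumptions chosen so that the direct cost effect exceeds the indirect benefit, mirroring the reported short-term pattern.

```python
# Simplified OLS mediation analogue (not the authors' full structural
# equation model) of splitting KM's effect on profitability into an
# indirect, innovation-mediated benefit and a direct cost. Coefficients
# are assumed values: indirect benefit a*b = 0.5 * 0.4 = 0.2 < direct cost 0.3.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 705                                   # matches the study's sample size
km = rng.normal(size=n)                   # range of KM techniques in use
innovation = 0.5 * km + rng.normal(size=n)                 # a path
profit = 0.4 * innovation - 0.3 * km + rng.normal(size=n)  # b path + direct cost

a = sm.OLS(innovation, sm.add_constant(km)).fit().params[1]
med = sm.OLS(profit, sm.add_constant(np.column_stack([km, innovation]))).fit()
direct, b = med.params[1], med.params[2]

print(f"indirect (value-enhancing) effect a*b: {a * b: .3f}")
print(f"direct (cost-increasing) effect:       {direct: .3f}")
print(f"total effect:                          {direct + a * b: .3f}")
```

With these assumed values the total effect comes out negative even though the mediated path is positive, which is the aggregate pattern the abstract describes.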
An evaluation instrument and strategy implications of service attributes in LOHAS restaurants
[ "Service quality", "Kano model", "LOHAS restaurant", "LORSERV scale" ]
[ { "content": "Summarize the following paper into structured abstract.\n\n1. Introduction: In recent years, with the rising awareness of environmental protection and health and the development of experiential marketing, the concept of LOHAS (lifestyles of health and sustainability) has been adopted in the hospitality and leisure industries. LOHAS, a concept proposed by Ray and Anderson (1998), can be interpreted as a living model that considers physical and psychological health and sustainable development of the environment. According to the estimation of Deloitte in 2014, in the upcoming 30-40 years, nearly 30 per cent of consumers of the Millennium era worldwide will create a LOHAS output of USD 30 trillion. Many restaurant operators have integrated the concept of LOHAS with their restaurant operations. In addition to offering existing delicacies and services, they intend to provide customers with deeper experiential value and an internal sense of happiness. Consequently, the services of a LOHAS restaurant are different from those of a traditional restaurant. Aside from food and beverages and basic leisure services, a LOHAS restaurant integrates sensational elements in the hearts of customers to create a joyful atmosphere and memorable experiences for customers' bodies and their mind's content (Cheng and Chen, 2016).\n2. Literature review: 2.1 Lifestyles of health and sustainability restaurants\n3. Methodology: 3.1 Research process\n4. Results: 4.1 Qualitative process of LORSERV scale development\n5. Discussion and conclusion: 5.1 Conclusion\n", "role": "user" }, { "content": "Purpose: This study aims to develop a comprehensive LOHAS (lifestyles of health and sustainability) restaurant service quality scale by using a rigorous qualitative and quantitative research process to effectively measure the service quality of LOHAS restaurants. Moreover, this study aims to further identify the Kano quality characteristics and strategic meanings of service attributes in LOHAS restaurants.\nDesign/methodology/approach: This study designed the preliminary items of the service quality scale for LOHAS restaurants (LORSERV scale) based on relevant literatures and expert interview procedures. This study identified the goodness of fit of the questionnaire content, construct validity and validity of the LORSERV scale using exploratory factor analysis and confirmatory factor analysis. The moderated regression was conducted to identify the Kano quality characteristics and strategic meanings of each service attribute in LOHAS restaurants.\nFindings: The results indicated that the LORSERV scale included seven dimensions (internal sense of happiness, transitiveness, environment, healthy catering, service commitment, green practicability and thoughtfulness), for a total of 33 items. According to the results of the Kano model, the seven service attributes were categorized into the attractive quality. A total of 25 service attributes were categorized into the one-dimensional quality, and one service attribute was categorized into the must-be quality.\nOriginality/value: The contribution of this study is that the scale could facilitate operators of LOHAS restaurants to effectively understand customer perceptions of service quality and serve as a reference to upgrade and improve service quality. 
The identification of the Kano quality characteristics for each service attribute helps LOHAS restaurants understand the strategic meaning of each service attribute and can serve as a reference for making distinctive service strategies to achieve sustainable operations.\n", "role": "assistant" } ]
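The Kano categorization via regression that the abstract mentions can be illustrated with a small sketch. The record does not reproduce the paper's exact moderated-regression specification, so the code below follows one common dummy-variable (penalty-reward style) approach, under that stated assumption: regress overall satisfaction on a "fulfilled" and an "unfulfilled" dummy per attribute and read the category off which coefficients are significant.

```python
# Dummy-variable regression sketch of Kano classification for one service
# attribute (an assumed common variant, not the paper's exact specification).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 400
perf = rng.integers(1, 6, size=n)         # attribute rating on a 1-5 Likert
d_high = (perf >= 4).astype(float)        # attribute fulfilled
d_low = (perf <= 2).astype(float)         # attribute not fulfilled
# Simulated overall satisfaction for an "attractive" attribute: fulfilment
# raises satisfaction strongly, non-fulfilment barely lowers it.
sat = 3 + 0.8 * d_high - 0.1 * d_low + rng.normal(scale=0.5, size=n)

fit = sm.OLS(sat, sm.add_constant(np.column_stack([d_high, d_low]))).fit()
b_high, b_low = fit.params[1], fit.params[2]
sig_high, sig_low = fit.pvalues[1] < 0.05, fit.pvalues[2] < 0.05

if sig_high and not sig_low:
    label = "attractive"          # only fulfilment moves satisfaction
elif sig_high and sig_low:
    label = "one-dimensional"     # satisfaction tracks performance linearly
elif sig_low:
    label = "must-be"             # only non-fulfilment hurts
else:
    label = "indifferent"
print(f"b_high={b_high:.2f} b_low={b_low:.2f} -> {label}")
```

Run per attribute, this kind of classification yields the attractive / one-dimensional / must-be split reported in the findings.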
Analyzing the usability of travel web sites in Hainan, China: The perspectives of e-buyers and e-lookers
[ "Travel", "China", "Performance management" ]
[ { "content": "Summarize the following paper into structured abstract.\n\nBackground: The success of a web site, in addition to providing sufficient information (functionality), largely depends on its ease of use (usability). Broadly speaking, usability refers to the ease of learning or using a user interface (Nielsen, 2000). In other words, it is of paramount importance that users can use a web site in an efficient and pleasant way. Although tourism researchers have shown the applicability of functionality and usability in different areas (Au Yeung and Law, 2006; Liang and Law, 2003), prior studies have not investigated the difference in perception of usability between e-buyers and e-lookers.According to Internet World Stats (2007), China had 162 million internet users in June 2007, which was the second largest in the world after the US (with 211 million users). Still, the internet penetration rate in China was only 12.3 per cent in June 2007. In other words, there are plenty of opportunities for further development in this promising market. To gain business benefits from this potentially huge market, international businesses have been, and will be, launching their aggressive China expansion plans (Hotel Asia Pacific, 2007). Nevertheless, the existing tourism literature has a very limited number of published articles that examined the behaviors of travel web site users in China.In view of the absence of prior studies on analyzing usability performance in the perspective of e-buyers and e-lookers in general, and particularly in China, this study makes an initial attempt to fill in this void by investigating the issue in Hainan, one of the most popular travel destinations in China. The findings would thus be of interest to tourism practitioners, academic researchers, educators, and students to better understand the perceptions of potential consumers in Hainan about China-based travel web sites.\nMethodology: A structured questionnaire was developed in this study that examined the perceptions of e-buyers and e-lookers on travel web sites in China. The questionnaire used in this study was modified from the instrument that was used by Law and Ngai (2005) to analyze Hong Kong travelers' view on usability of travel web sites, which was built on the basis of Abeleto's usability hazards index (Sprenger, 1999). The modification was made primarily to reflect the needs of Chinese travelers. Since the questionnaire was to ask respondents in China, the original English questionnaire was translated into Chinese. A back-translation process was used to ensure the resemblance of both versions of the questionnaire by three academic researchers who are all bilingual speakers of English and Chinese. Due to budgetary constraints, the data collection was conducted in Hainan instead of all representative regions in the country. To ensure the content validity and readability, the translated questionnaire was pilot tested in November 2007 by 30 residents in Hainan who had previously visited at least one China-based travel web sites that were owned and managed by local enterprises in China. A few minor adjustments on wordings were then made in response to the suggestions received from the pilot test.The final version of the questionnaire comprised two major sections. At the start, the questionnaire had a qualifying question, which asked whether the respondent had visited any China-based travel web sites in the past 12 months. 
Qualified respondents were then asked to proceed to the first section, which consisted of 24 usability attributes grouped into five dimensions. Their perceptions of each usability attribute were measured on a five-point Likert scale (from 1=very poor to 5=very good). The second section had eight questions related to respondents' demographics. Finally, all respondents were asked whether they had previously made purchase(s) on any travel web sites in the past 12 months. Their responses to the last question were used to distinguish e-buyers from e-lookers.The large-scale survey was carried out in Hainan province using a convenience sample in December 2007. A team of 30 university student helpers was recruited and trained to get familiar with the questionnaire and to conduct personal interviews. Using a quota sampling approach, each student was required to collect 20 completed questionnaires within two weeks. During the process, the students were reminded about the ethical issues involved in conducting research. Additionally, the students were instructed to inform their contact persons that no one could fill in more than one questionnaire. At the end of the survey, 600 questionnaires were received but 111 of them were discarded due to missing entries. The remaining 489 usable questionnaires, representing 81.5 per cent of the received questionnaires, were further analyzed.\nFindings and discussion: Demographic characteristics of the respondents\nPractical implications and conclusions: Findings of this study have offered a couple of important managerial/practical implications. First, the demographics of e-buyers and e-lookers matched those reported in prior studies on the characteristics of online purchasers and online browsers in other regions. As such, the difference in perceived usability performance as compared to Law and Ngai's (2005) study on Hong Kong-based web sites was largely due to the fundamental design of the web sites instead of the difference between users in different regions. This general view highlighted an essential signal that owners of travel web sites in China should pay attention to improving the usability of their web sites in general, and in particular to dropping distracting animations, scrolling text or marquees, as this attribute was viewed as somewhat poor.More importantly, none of the included attributes or dimensions exhibited any significant difference between e-buyers and e-lookers. Tourism practitioners in Hainan should concentrate on using their limited resources to improve their web sites instead of converting e-lookers into e-buyers. Some tourism researchers have advocated the critical importance of converting e-lookers into e-buyers (O'Connor, 2004; Susskind et al., 2003). Drawing on the findings of this study, such a conversion attempt is less important, if applicable at all, to Hainan's present tourism industry. What apparently needs immediate attention from tourism practitioners in Hainan, and likely in China, is to find ways to improve the usability of their web sites should they want to compete with international travel web sites.In summary, respondents were not very satisfied with the design and ease of use of the web sites. It is generally agreed that meeting consumers' needs is one of the primary factors for business success at present (McIntosh et al., 1995).
This study, therefore, makes a contribution to China's tourism industry by providing useful information about the perceptions of users in Hainan on China-based travel web sites' usability.\n", "role": "user" }, { "content": "Purpose: - This study sets out to investigate the perceptions of e-buyers and e-lookers in Hainan, China on usability performance of China-based travel web sites, an emerging sector of the tourism industry in China.\nDesign/methodology/approach: - Data collection was conducted in Hainan, a province in Southern China, in December 2007. A structured questionnaire was used to collect the views of users on different usability dimensions and attributes on China-based travel web sites.\nFindings: - Empirical findings indicated that respondents who only viewed China-based travel web sites had an average usability performance. More importantly, there was no significant difference between e-buyers and e-lookers.\nResearch limitations/implications: - The major limitations of the study are the non-random selection of respondents and that the respondents were from one province in China.\nOriginality/value: - With the views of 489 respondents, research findings should provide an overview for tourism practitioners to allocate their limited resources on e-business development appropriately.\n", "role": "assistant" } ]
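The headline result of this record - no significant difference between e-buyers and e-lookers on any usability attribute or dimension - corresponds to a simple two-group comparison of Likert means. A minimal sketch follows, assuming invented group sizes and simulated scores; the record reports only the 489 usable questionnaires, not the buyer/looker split.

```python
# Minimal sketch of the e-buyer vs e-looker comparison: Welch's t-test on
# mean usability ratings (1-5 Likert). Group sizes (210 + 279 = 489) and
# the scores themselves are invented for illustration.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
e_buyers = rng.normal(loc=3.1, scale=0.6, size=210)    # mean usability scores
e_lookers = rng.normal(loc=3.1, scale=0.6, size=279)

t, p = stats.ttest_ind(e_buyers, e_lookers, equal_var=False)
verdict = "no significant difference" if p >= 0.05 else "significant difference"
print(f"t = {t:.2f}, p = {p:.3f} -> {verdict} at the 5% level")
```

In practice the same test would be run per attribute and per dimension, with a multiple-comparison correction worth considering across the 24 attributes.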
Prioritization of dimensions of visual merchandising for apparel retailers using FAHP
[ "Visual merchandising", "Product presentation", "AHP", "Web store elements", "Web site interface", "Visualization technique", "MCDM technique", "Apparel", "Retailing", "Online shopping", "Customer engagement", "E-commerce" ]
[ { "content": "Summarize the following paper into structured abstract.\n\n1. Introduction: Non store retailing, specially electronic retailing has garnered considerable attention owning to its promising growth potential (Fernie et al., 2010; Liao et al., 2017). Due to the ever rising number of electronic retail stores, consumers are presented with numerous choices (Kumar and Dash, 2015; Kim, 2019; Sarkar et al., 2020). With no additional switching cost, the customers with just a click can move from one website to another, thus challenging the retailers to hold customers' attention (Chen et al., 2010; Kahn, 2017). Webpage on which consumer spends more time is deemed more important and interesting by the consumer (Mihith et al., 2018). The chances of converting visit into sales are higher on such websites (Panagiotelis et al., 2014). One of the ways of keeping online customers engaged is by creating interesting and informative web stores (Yi et al., 2015; Apenes Solem, 2016). Such web stores become motivators for increasing sales by prolonging the customer's stay (Flavian et al., 2017; Friedrich et al., 2019). Web stores are important touch points for customer engagement and visual merchandising plays a crucial role in this (Friedrich et al., 2019; Connell et al., 2019).Earlier studies have established the significance of layout and design in online fashion retailing (Then and DeLong, 1999; Lorenzo-Romero et al., 2013). Online store design and environment can evoke positive or negative feeling that will influence customer's subsequent beliefs, purchase intention and behaviors (Wu et al., 2014). Environmental cues might be more impactful than other marketing cues with regard to online retailing (Lorenzo-Romero et al., 2013). Online store environmental cues comprises of many elements such as website appearance, merchandise presentation, navigation etc (Kim et al., 2009).Effectiveness of information content has considerable influence on online satisfaction and consumer's purchase intent (Singh et al., 2016; Liu et al., 2017). Information display techniques are linked to providing customers with better store experience (Kim, 2019). Product presentation is of prime importance as it shoulders the responsibility of conveying the attributes, characteristics and appeal of a product that cannot be physically assessed in absence of touch in online medium, neither measured nor can be tried directly (Eroglu et al., 2003; Flavian et al., 2017). Studies have also established that if a detailed description of the functional characteristics of the product is provided along with superior visualization, it can motivate the consumer to revisit the store by evoking a sense of trust which is an important antecedent for online purchasing (Hubert et al., 2018). For superior presentation of product information, retailers use a number of technologies such as 2D and 3D picture of the merchandise, video, verbal description of the merchandise and its attributes etc. (Flavian et al., 2017). Thus, e-tailers make continuous efforts and develop strategies to accentuate and improvise the visual display of products on the web store in design terms called as Online Visual Merchandising (OVM) (Park et al., 2009).Studies in the past have covered various facets of online visual merchandising. 
Some studies identified the dimensions of OVM through content analysis of existing websites (Khakimdjanova and Park, 2005; Ha et al., 2007) while others focused on creating an analogy with physical store visual merchandising elements (Eroglu et al., 2003). Researchers have also focused on identifying the influence of merchandise presentation elements on customers' understanding of the merchandise, their perception of the merchandise, shopping indulgence and flow (Yi et al., 2015). Several visualization and product presentation techniques have been studied in isolation and their importance in customer buying behavior is documented. There are also studies which have compared two elements against each other to establish their relative importance. But such studies are very few, and usually only two elements are compared. Retailers have several visual merchandising (VM) dimensions to choose from. It is difficult, costly and at times not possible to give equal importance to all these dimensions while designing the web store and displaying the product information. It is therefore crucial to understand the relative significance of these dimensions. However, there is a scarcity of literature establishing the relative importance of online visual merchandising dimensions against each other. If retailers could have the dimensions prioritized, they could emphasize the more important dimensions, which would help in making web stores more effective, impactful and interesting.This study aims to bridge this gap by fulfilling the following objectives:RO1. To identify and categorize the dimensions of online visual merchandising. RO2. To prioritize and analyze the identified dimensions of online visual merchandising. RO3. To check the consistency of the prioritized dimensions through sensitivity analysis.The evaluation of OVM dimensions is complex as multiple variables are involved and each one has some influence on the buyer's decision-making process. In order to evaluate the dimensions, there are diverse techniques available such as AHP, TOPSIS and PROMETHEE (Kumar et al., 2018). The Analytical Hierarchy Process (AHP) is a suitable technique for finding solutions to problems involving many attributes (Saaty, 1980). It is a widely used technique in various research fields for obtaining criteria weights or preferences of alternatives (Kabir et al., 2014; Kumar and Dash, 2014). It allows using both qualitative and quantitative criteria, and the ordered fashion of the decision making allows backtracking and ensures the quality of the decision-making process through consistency indices (Tscheikner-Gratl, 2017).However, human judgment about the value of alternatives is subjective and not precise. To overcome this limitation, researchers blend fuzzy theory with AHP (Gopalan and Satpathy, 2015). In the case of online retailers, a retailer has to choose from a number of elements and decide which ones to use to make the store most effective. The decision is subjective and not precise. In such cases, it is beneficial to apply the fuzzy AHP technique and identify the elements deemed important for product information display.After undertaking an extensive review of the literature, consultation with experts in the field and regrouping, four major dimensions of Online Visual Merchandising are identified: "Pictorial Presentation", "Customization", "Product Information" and "Web Store Environment". These dimensions further comprise 16 sub-dimensions.
This study then develops a fuzzy evaluation model based on fuzzy AHP which prioritizes the relative weights of the OVM dimensions.The remainder of the paper comprises seven sections. Section 2 covers literature on various OVM dimensions and their grouping for this study. Section 3 describes the research methodology, while Section 4 is dedicated to data analysis. Section 5 presents the discussion of the study. This is followed by Section 6, which provides the conclusion and implications of this study. Last is Section 7, covering limitations and future research directions.\n2. Review of literature: 2.1 Literature search strategy\n3. Research methodology: In the online shopping environment, apparel products lack tactile qualities. Out of the five human senses (sight, hearing, smell, taste and touch), sight is the one consumers use to evaluate products while shopping online (Park et al., 2008; Kahn, 2017). One of the challenges faced in online retailing is the ineffective presentation of product information to the consumer, resulting in a lack of product understanding (Park et al., 2008; Kim, 2019). Since consumers cannot touch and feel the products, the technology used for displaying product information to the consumer should be carefully selected (Yoganathan et al., 2019). To do this, it is important to first identify the product presentation dimensions and then understand the prioritization order of these dimensions and their sub-dimensions.To carry out this study, an interview schedule was designed. Twenty experts were interviewed to fill in the pairwise comparison sheets. The experts' profiles included subject matter specialists, academicians, consultants and senior industry professionals. Only experts who had been working in the domain of e-commerce, apparel and retail for at least 10 years were interviewed. The detailed profiles of the respondents are given in the respondent sheet in the Appendix for reference. Experts were interviewed in two phases, with a twofold objective: in the first phase, to get consensus on the dimensions of visual merchandising identified from the literature and to categorize them; in the second phase, to establish the relative importance of these dimensions and sub-dimensions with respect to each other using a pairwise comparison matrix.The literature review and expert discussions resulted in the identification of four dimensions and sixteen sub-dimensions. The FAHP technique was then utilized to prioritize and evaluate the identified dimensions of OVM for fashion and apparel retail stores (refer to Figure 2). 3.1 Fuzzy AHP approach\n4. Data analysis: 4.1 Building the hierarchy framework\n5. Discussion: Amongst the four dimensions, Pictorial Presentation ranked highest with a weight of 0.339, closely followed by Product Information at rank two (0.313) and then the Presentation Customization dimension (0.210). The Web Store Environment dimension (0.138) ranked lowest. This implies that the Pictorial Presentation dimension is considered the most important OVM dimension for fashion and apparel retailers. The result is in alignment with previous studies, which have put considerable emphasis on visualization techniques.Researchers and practitioners have long tried to evaluate the effectiveness of delivering product-related information through visual cues versus textual cues. Studies have produced contrary results. 
Some studies state that information presented in text form, covering aspects such as product description, color, size, etc., has far stronger effects on inducing positive consumer attitudes than pictorial presentation of the product (Kim and Lennon, 2008). Other studies suggest that pictorial presentation has more influence on the buying behavior of the consumer and is superior to textual presentation (Blanco et al., 2010). The results of our research support the view that pictorial presentation is considered more important and requires more attention from retailers when designing web stores for fashion and apparel products. However, the Product Information dimension closely follows the Pictorial Presentation dimension. So, for some products, pictorial presentation may suffice, while for other products, having some additional information might help in making the final decision.The Customization dimension is ranked third in order of preference. From this lower rank, it can be inferred that if a retailer uses sophisticated visualization technologies, these create an immersive buying platform that heightens the customer's experience, making online shopping spaces more realistic and closer to the real-world context; in such a scenario, even if somewhat less customization is provided, the consumer will still be willing to purchase the product.Web Store Environment ranks last. Its elements are considered low task-relevant cues in online retailing. For items such as apparel, which are considered hedonic, they hold less significance. Customers feel engaged when they are exposed to high task-relevant cues, and find cues such as an over-emphasized background, color and music a deterrent to shopping. This is especially true for price-conscious consumers, who are more interested in finding product information than in seeking enjoyment (Sarkar et al., 2020). The sub-dimension \"no. of images\" has the highest global weight. \"Presentation mode\" stands at rank two, followed by \"size information\" and \"product description\" at third and fourth rank, respectively. Past studies have also pointed towards placing more emphasis on presentation mode and using multiple images, so that the garment is captured from different angles and viewed as a whole, which will ultimately reduce the perceived risks of purchasing a fashion product online. The alterable background sub-dimension holds the last position; it is not considered significant in influencing online shopping.\n6. Conclusion and implications: As the world is impacted by epidemics more frequently, social distancing is becoming a new norm that will push more and more consumers to shift their buying habits from physical stores to online stores. At this juncture, visual merchandising can play a crucial role in consumers' web store selection process. Consumers will prefer those web stores which are more informative and interesting. Visual merchandising is enabling retailers to fulfill consumers' requirement for a sensory, enjoyable and informative experience while shopping online. The elements extracted from the literature and refined through discussion with experts are prioritized based on FAHP to provide the best-suited elements for online stores. From the analysis, it is concluded that the \"Pictorial Presentation\" dimension scored the highest weight, and amongst the sub-dimensions, \"Number of product images\" scored the highest weight; these are hence most important for online store presentation. 
Further, sensitivity analysis reinforces the stability of the ranking. The study conducted here could aid both academia and industry in further advancing the field of OVM. Retailers using these results can create the ideal engagement ecosystem, which will help in making online retail stores more interesting for consumers. The findings of this study lead to several theoretical and managerial implications, which are discussed in the subsections below. 6.1 Theoretical implications\n7. Limitations and future research directions: The study carries many implications and is significant for the retailing world; yet, it also has certain limitations. The study focuses on apparel as the only product category; therefore, the results apply to this category and may not generalize to all product categories, whose characteristics and nature differ. It is thus possible that the importance of these visual merchandising elements varies for a different product category; the dimension that holds the highest weight and importance for the apparel category may not do so for other categories. There are several future research directions which emerge from this study. Future researchers can replicate this study for other product categories. This will help in understanding if there is any major difference between the dimensions prioritized for evaluating an apparel product, which is an experiential product, and those prioritized for other product categories. A cross-cultural study could be another interesting avenue to explore; studies could be carried out to compare the preferences of consumers from two different nations or regions. Researchers can also conduct similar studies using a larger sample size. Research can also be carried out using consumers as the sampling unit instead of experts, and a comparison can then be drawn between the expert-based and consumer-based studies. Researchers can also explore other techniques of comparing and ranking the visual merchandising dimensions. Another potential area of research could be the development of optimization models for OVM dimensions. These models can be used by stores to optimize their visual merchandising according to the target groups.\n", "role": "user" }, { "content": "Purpose: Visual merchandising has a direct impact on the shopping experience, making it pertinent for retailers to ensure that their store environment has an effective design. This study blends fuzzy numbers and the Analytical Hierarchy Process to create a fuzzy evaluation model prioritizing the relative weights of visual merchandising dimensions of an online fashion apparel store. This study will help fashion retailers in creating more engaging and informative online stores, leading to a subsequent increase in online retail sales.\nDesign/methodology/approach: Visual merchandising dimensions and sub-dimensions used by retailers for online stores were identified through a review of the literature and discussion with experts. Twenty experts were interviewed and their responses captured. The responses were analyzed using the fuzzy AHP technique, resulting in the dimensions being prioritized according to their weights. For testing the stability of the results, sensitivity analysis was conducted.\nFindings: Four key dimensions and sixteen sub-dimensions were extracted. Weights were calculated using FAHP. \"Pictorial Presentation\" has the highest weight, followed closely by the \"Product Information\" dimension. 
Amongst the sub-dimensions, \"Number of Product Images\" scored highest, followed by \"Product Presentation Mode\"; \"Alterable Background\" scored the lowest weight.\nOriginality/value: Previous studies have identified Online Visual Merchandising dimensions and have also studied the impact of some of these dimensions on consumer buying behavior, but their relative significance has not yet been determined. This study fills that gap.\n", "role": "assistant" } ]
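A note on the method in the record above: the paper prioritizes the four OVM dimensions with fuzzy AHP but shows no computation. Below is a minimal sketch of one common variant (Buckley's geometric-mean method over triangular fuzzy numbers); the 4x4 judgment matrix, the dimension order, and the centroid defuzzification step are illustrative assumptions, not the paper's actual expert data.

```python
import numpy as np

# Pairwise comparisons of the four OVM dimensions as triangular fuzzy
# numbers (l, m, u). The judgments below are invented for illustration.
dims = ["Pictorial Presentation", "Product Information",
        "Customization", "Web Store Environment"]

one = (1.0, 1.0, 1.0)
def inv(t):
    """Reciprocal of a triangular fuzzy number (l, m, u) -> (1/u, 1/m, 1/l)."""
    l, m, u = t
    return (1.0 / u, 1.0 / m, 1.0 / l)

M = [[one,            (1, 1, 2),      (1, 2, 3),      (2, 3, 4)],
     [inv((1, 1, 2)), one,            (1, 2, 3),      (1, 2, 3)],
     [inv((1, 2, 3)), inv((1, 2, 3)), one,            (1, 1, 2)],
     [inv((2, 3, 4)), inv((1, 2, 3)), inv((1, 1, 2)), one]]

# Buckley's method: fuzzy geometric mean of each row ...
n = len(M)
geo = []
for i in range(n):
    l = np.prod([M[i][j][0] for j in range(n)]) ** (1 / n)
    m = np.prod([M[i][j][1] for j in range(n)]) ** (1 / n)
    u = np.prod([M[i][j][2] for j in range(n)]) ** (1 / n)
    geo.append((l, m, u))

# ... then fuzzy weights w_i = geo_i (x) (geo_1 (+) ... (+) geo_n)^(-1)
sl, sm, su = map(sum, zip(*geo))
fuzzy_w = [(l / su, m / sm, u / sl) for (l, m, u) in geo]

# Centroid defuzzification, then normalize to crisp weights.
crisp = np.array([(l + m + u) / 3 for (l, m, u) in fuzzy_w])
crisp /= crisp.sum()

for d, w in sorted(zip(dims, crisp), key=lambda x: -x[1]):
    print(f"{d}: {w:.3f}")
```

With real expert matrices, a consistency check would precede the weight calculation; it is omitted here for brevity.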
Patterns of variation and correlation in quality control characteristics of segment installation: A case study
[ "Quality control", "Tunnel boring machine" ]
[ { "content": "Summarize the following paper into structured abstract.\n\n1. Introduction: Segmental tunnel linings currently have wide applications throughout the world. This kind of lining is implemented using some of the types of tunnel boring machines (TBMs). Evaluation of the quality and performance of a tunnel lining during the installation of segments are the main objects of tunneling projects. Inspecting the quality of the fitted segments into segmental lining rings is done based on the quality control instructions provided by the quality assurance team. An installed ring is composed of four different segments which are locked into each other. For each ring selected for inspection, 16 different characteristics are measured and the observations are recorded. It is a fact that the more the number of the inspected rings increases, the more the gathered data analysis is complicated.\n2. Segmental lining in mechanized boring tunnel: Boring of tunnels is currently done either fully mechanized or semi-mechanized. TBMs are one of the best known pieces of mechanized equipment for the excavation of long tunnels. In the excavation of tunnels using a TBM, segmental lining is one of the most important processes. In segment installation, a tunnel is supported by ring-shaped steel-reinforced lining segments or lining segments, immediately after drilling under the protection of the shield. Variously shaped lining segments, which in most cases are temporarily screwed into place, are used for this purpose. The lining segments are moved into position by an erector, a remote-controlled gibbet, which picks them up mechanically or by suction. The segments are then screwed into place by qualified personnel. The annulus between the lining segment's external diameter and the ground is continuously filled with mortar or pearl gravel in order to bed and stabilize the tunnel.\n3. Multivariate analysis of segmental lining data: Kashan is located in the province of Isfahan, in the hot and dry climatic area of Iran. The Golab tunnel is being constructed for transferring water from Isfahan to Kashan using a double shield TBM and it is approximately 10 km long. As a national project, the tunnel is being excavated and lined simultaneously using a hexagonal segmental lining system. Each ring consists of four hexagonal segments (one invert, one crown and two side segments) as shown in Figure 1.\n4. Conclusion: The quality of the segmental lining depends on adjusting 16 different characteristics that were measured by the quality control team. Using PCA, we have reduced the number of variables from 16 to a much smaller set of four uncorrelated variables that retain about 84 percent of the total variability in the original observations. The first principal component that accounts for about 63 percent of the variability in the original observations is related to extra excavation by TBM. The second component that explains 11.6 percent of the variation is associated with high pressure of jacks during the segments installation. The third component that accounts for about 6.6 percent of the variation is due to less injection of pea gravel. Finally, the fourth principal component that explains 3.3 percent of the variation is due to unbalancing of crown segments.\n", "role": "user" }, { "content": "Purpose: Evaluation of the quality and performance of a tunnel lining during the installation of segments are the main objects of tunneling projects. 
Because the quality is affected by several attributes, the purpose of this paper is to apply an appropriate multivariate data analysis that helps extract applicable knowledge from the data collected on the related attributes of the initially installed rings.\nDesign/methodology/approach: Principal component analysis (PCA) is used to analyze the data obtained by the quality control team. The authors use canonical correlation analysis (CCA) to extract linear combinations of the original attributes of the two groups that produce the largest correlations with the second set of variables.\nFindings: The authors reduce the dimensionality of the original data set for further analyses, and use a small number of uncorrelated variables rather than a larger set of correlated variables to take effective and efficient action to control the quality of the tunnel lining. The authors also explore the correlation structure and relationship between the two main groups of characteristics used for assessing the quality of the installed rings. Then, instead of the large number of original characteristics in the two groups, the authors can easily control these few to attain a reasonable quality for the tunnel lining.\nOriginality/value: This is a case study, and for each ring selected for inspection, 16 different characteristics are measured and the observations are recorded. The authors use PCA and CCA to analyse the data and interpret the results. Although the methods are not new, applying them to these data yields useful and informative outcomes and interpretations.\n", "role": "assistant" } ]
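The record above reports reducing the 16 quality control characteristics to four principal components retaining about 84 per cent of the variance. A minimal sketch of that kind of reduction with scikit-learn follows; the synthetic matrix stands in for the real ring measurements, which are not part of this record, so it will not reproduce the 4-component / 84 per cent result (real, strongly correlated measurements need far fewer components than random data).

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# Stand-in data: rows = inspected rings, columns = the 16 measured
# quality control characteristics (values here are synthetic).
X = rng.normal(size=(120, 16))
X[:, 1] = X[:, 0] + 0.1 * rng.normal(size=120)  # induce some correlation

# Standardize first, since the characteristics are on different scales.
Z = StandardScaler().fit_transform(X)

pca = PCA().fit(Z)
cumvar = np.cumsum(pca.explained_variance_ratio_)
for k, c in enumerate(cumvar, start=1):
    print(f"{k} components: {c:.1%} of total variance")

# Keep the smallest number of components reaching ~84% of the variance,
# mirroring the paper's 4-component solution on its real data.
k = int(np.searchsorted(cumvar, 0.84) + 1)
scores = PCA(n_components=k).fit_transform(Z)
print("retained components:", k, "score matrix shape:", scores.shape)
```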
Values based leadership
[ "Leadership", "Organisation", "Values", "Leading others", "Self-leadership" ]
[ { "content": "Summarize the following paper into structured abstract.\n\n__NO_TITLE__: The concept of values-based leadership is probably more relevant today than it has ever been. There is no doubt that, even if our clients and colleagues do not refer to it by this name, there is now an acute understanding of what we connect to and, just as importantly, what we do not connect to. Furthermore, we use this connection to make decisions about our lives: the places that we want to work, the products and services that we choose to buy, the leaders that we will follow. Issues in both financial services and other sectors over recent years have sharpened our awareness as to what we expect and want from leaders and organisations. And if we define that still further, we are looking for a values-based approach that gives us some form of connection. The impact of social media means that your organisational brand and values are not what you say they are but are what your colleagues and clients say they are based on their experience of your organisation. That is also true for us as individual leaders. What people experience of us, as leaders, is how they connect to us and the values that we bring to our leadership role.\nLeading self: Values-based leadership starts with you as a leader; how effectively you lead yourself and the personal values that define you as a leader. When we talk about values in a self-leadership context we are referring to behaviours - not just words - and consistently behaving in line with your stated values. This means having the self-awareness to understand who you are and what is important to you as a leader. It also means being prepared to show courage and the humility to role model your values to others which brings an authenticity to your leadership. We find in our work with leaders that this leadership authenticity, perhaps more than any other single factor, has the most powerful impact on drawing followers to leaders.\nLeading others: Leading others in a values-based culture means you, as the leader, helping groups to identify how they are going to work with each other. This incorporates team values, a combination of trust, respect and support, collaboration and finally interdependency. Team values represent the collective values of the whole team. Leading others is about a willingness to lead people so that you, as their leader, are able facilitate opportunities for the team to identify and commit to team values. 
Then they know who they are, how to work to their strengths and how to work together to become a high-performing team.\nLeading organisation: At an organisational level, values-based leadership is about creating connection: a connection with the purpose of the organisation, a connection with the vision for the organisation and a connection with the values that will underpin the purpose and vision.\n", "role": "user" }, { "content": "Purpose: The purpose of the paper is to explore the power of values-based leadership in connecting colleagues and clients to an organisation and, in doing so, creating sustainable business success.\nDesign/methodology/approach: The paper is based on a model of values-based leadership and case study examples of where the model is being successfully used in organisations.\nFindings: It was found that the organisations cited in the article that use a values-based approach, such as Apple and Unilever, create a connection that has a significant impact on organisational performance.\nOriginality/value: The paper uses a new model of values-based leadership to demonstrate its impact on organisational performance. The different elements of the model are explored using current successful organisational case studies. It will therefore be of value to anyone using or considering a values-based approach to developing an exciting and successful leadership culture.\n", "role": "assistant" } ]
Artificial intelligence-based training learning from application
[ "Training", "Learning", "Development", "Artificial intelligence", "Training effectiveness" ]
[ { "content": "Summarize the following paper into structured abstract.\n\nPurpose and introduction: With accelerated developments in technology, the future of work is poised for change. Many people are curious to learn how artificial intelligence (AI) will disrupt their jobs and work processes. Some experts believe AI will create new lines of business and jobs and will assist employees to do their job more efficiently and effectively. Many existing jobs such as paralegals, journalists, office workers, and computer programmers are under threat because of the growth of AI and organizations have started to realize the need to re-skill and train their existing workforce. Training and development functions are also witnessing a significant transformation due to recent applications of AI.\nTheory and artificial intelligence enabled training: The metacognitive theory holds that learners learn best when they are aware of their weak areas (Posner, 2017). With the help of pre-training assessment, quizzes, tests and feedback, an AI platform captures data and improves trainee's awareness of weak areas. The theory of Deliberate Practice holds that knowing weak areas helps us focus our practice (Posner, 2017). The weak areas are targeted and improved by continuously adjusting content with AI-based training system. The Theory of Fun for Game Design proposes that learners are most engaged when challenged but not too challenged (Posner, 2017). AI platforms can boost the learner's confidence by serving a question that provides a quick win. Ebbinghaus Forgetting Curve holds that for long-term memory, learner needs to reinforce the learning with practice just before he or she is about to forget it (Posner, 2017) and AI-based platforms are able to do this.\nApplication of artificial intelligence in training: Customized, personalized, and constantly updated\nDiscussion and future of artificial intelligence-based training: Benefits of AI-based training extend beyond productivity gains. Benefits include learning at own pace leading to higher retention. AI-based expert systems can examine the training activity data, identify weaknesses, and suggest corrective actions. With a large pool of curated and tagged content, AI tools can create significant variations of personalized and customized content and test questions. Automatic content translation in multiple languages is leading to saving of time and money. An AI-based system can assess a learner's behavior, cognitive, and engagement preferences and align the learning with business outcomes. AI facilitates learning through trial and error methods.\n", "role": "user" }, { "content": "Purpose: This paper aims to discuss the rationale, theoretical foundation, application, and future of artificial intelligence (AI)-based training.\nDesign/methodology/approach: A review of relevant research papers, articles and case studies is done to highlight developments in research and practice.\nFindings: AI-based training systems are smart, intelligent and expert in handling queries. 
These systems can curate content, grade, evaluate, and provide feedback to trainees, thus making learning adaptive and contextual.\nPractical implications: The application of AI is vital in the field of training, as it helps personalize and customize training programs to increase their effectiveness.\nOriginality/value: Executives and researchers can save time by reading relevant information on the linkage between training and AI, discussed and summarized in an easy-to-read format.\n", "role": "assistant" } ]
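The record above cites the Ebbinghaus forgetting curve as one theoretical basis for AI-driven reinforcement timing. Below is a minimal sketch of how a platform might schedule reviews from an exponential forgetting model; the retention formula R = exp(-t/S), the stability-growth rule, and the 80 per cent threshold are common textbook simplifications, not a system described in the record.

```python
import math

def retention(t_days: float, stability: float) -> float:
    """Exponential forgetting curve: predicted recall probability after t days."""
    return math.exp(-t_days / stability)

def next_review_in(stability: float, threshold: float = 0.8) -> float:
    """Days until predicted recall decays to the review threshold."""
    return -stability * math.log(threshold)

# Toy reinforcement loop: each successful review multiplies stability,
# so intervals stretch out -- the spacing effect the theory predicts.
stability, growth = 1.0, 2.5
day = 0.0
for review in range(1, 6):
    gap = next_review_in(stability)
    day += gap
    print(f"review {review}: day {day:5.1f} (interval {gap:4.1f} d)")
    stability *= growth
```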
A cross-national study of apparel consumer preferences and the role of product-evaluative cues
[ "China", "Fashion", "Cross-national study", "Canada", "Consumer behaviour", "Product-evaluative cue" ]
[ { "content": "Summarize the following paper into structured abstract.\n\nIntroduction: Due to company growth strategies, economies of scale and/or the saturation of domestic markets, many fashion companies seek opportunities abroad to drive their businesses. For example, H&M has grown from fewer than 500 stores worldwide in 1993 to 3,132 stores in 2013 (Peterson, 2014). A report published by IDC (2014) found that \"82 per cent of US businesses are now looking overseas in a bid to grow sales\". The independent market research company eMarketer (2015) anticipates that total global retail sales will increase from US$21.3 trillion in 2014 to US$27.9 trillion in 2019, with a 5.5 per cent year-over-year growth rate.\nApparel product cues: Consumers often base their evaluations of a product on a number of descriptive, inferential, informational and/or visual cues. These attributes may be classified as extrinsic or intrinsic cues. \"Extrinsic cues\" refer to those attributes that can be changed without changing the product's physical structure or appearance, such as price, brand name and country of origin (Hines and Swinker, 2001). \"Intrinsic cues\" refer to the concrete characteristics that are inherent to the physical composition of a product such as fabric, fit, colour and style. Any manipulations or modifications of these intrinsic cues will alter the physical appearance of a product (Hines and Swinker, 2001; Olson and Jacoby, 1972). Intrinsic cues can also be categorised into two types of utility: psychic and physical (Swan and Combs, 1976). Psychic utility is related to the aesthetic aspects of a product such as colour and style, whereas physical utility is related to performance aspects such as durability and comfort.\nExtrinsic cues: Economic differences and price\nIntrinsic cues: Cultural dimensions and style\nResearch methodology: Questionnaire survey\nFindings: The participants were predominately female, although there was a relatively higher proportion of males among the Chinese respondents than among the Canadians. Specifically, the Canadian sample was composed of 358 females (81.4 per cent) and 75 males (17.0 per cent), while the Chinese sample was composed of 282 females (58.3 per cent) and 166 males (34.3 per cent). The difference in gender distribution was significant in both samples. This may be attributed to the nature of the topic of study: a previous study conducted by Quigley and Notarantonio (2009) found that women are more engaged and interested in fashion than are their male counterparts. As shown in Table I, the subjects from these two countries were relatively young, with a large number of them being students. Most fell between 18 and 27 years of age, with the mean age being 24.9 and 26.2 years for the Canadian and Chinese consumers, respectively. It is worth mentioning that 24 per cent of the Chinese participants preferred not to disclose their age.\nDiscussion and conclusion: According to our findings, there was no significant difference in how Canadians and Chinese consumers evaluated the price cue in apparel shopping. Both groups of consumers were concerned about product price when they shopped for clothing regardless of economic inequality between nations. Although per capita income is higher in Canada, consumers in our study often used price to guide and justify their spending. This may be due to the fact that many participants were students (as shown in Table I), which made them more likely to have limited budgets to spend on consumer goods. 
The lower income of Chinese in general and the student status of our Canadian respondents mean that it is not difficult to understand why both Canadian and Chinese participants used price as an indicator of product quality and/or monetary sacrifice (Monroe, 2003), helping to compare alternatives and determine purchasing decisions. This finding may also provide support for the notion that young consumers have a greater tendency to believe in a price-quality relationship than do their older counterparts (Johnson and Kellaris, 1988).\nLimitations and future research: The present study has several limitations, as many studies do. First, the findings cannot be legitimately generalised to fit all countries. More similar research will be required in other parts of the world to increase validity and reliability. Second, the effects of the extrinsic and intrinsic evaluative cues vary among different product categories/types. Therefore, it would be meaningful to conduct additional research using different consumer goods and/or specific apparel products. Third, the majority of the data were collected from a relatively young age demographic; therefore, the results of this study lacked generalisability to other segments of the population. Fourth, in-depth information can be gained from employing qualitative methods (one-on-one in-depth/focus group interviews) to generate insights into actual shopping experiences. Nevertheless, the results of this study provide meaningful insights and add beneficial information to the existing literature by presenting salient effects and correlations among various evaluative cues.\n", "role": "user" }, { "content": "Purpose: The purpose of this paper is to investigate and identify the salient effects of apparel evaluative cues in order to enrich our understanding of consumer preferences and behaviour in two different socio-cultural contexts: Canada from the West and China from the East.\nDesign/methodology/approach: Online and paper questionnaires were used to collect data from Canada and China. Based on prior research, 14 hypotheses were developed, and SPSS statistical analyses were used for this study.\nFindings: According to the findings, Canadian and Chinese participants used price as an indicator of product quality and/or monetary sacrifice. Overall, the consumers from both countries were more concerned about the garment fit and style than brand name and country of origin. It is imperative for fashion practitioners to prioritise their resources and focus more on product research/design and prototype development. Fit, comfort and fabric were strongly correlated, except for \"fit and comfort\" in the Canadian sample. In addition, durability, ease of care and wardrobe coordination play a relatively less significant clothing evaluative role than many other product cues.\nOriginality/value: Relatively few research studies have focussed on apparel consumer behaviour and the salient impact of product-evaluative cues, particularly from a cross-national perspective. This study covers a wide array of important evaluative cues, and provides meaningful insights to both fashion academicians and practitioners. This is one of the few studies that provide an in-depth and comprehensive report on the role and effects of apparel product cues.\n", "role": "assistant" } ]
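The record above compares how two national samples rate evaluative cues but names only "SPSS statistical analyses". As a rough illustration of one such comparison, the sketch below runs an independent-samples (Welch's) t-test on a cue-importance rating with SciPy; the ratings are synthetic, the sample sizes merely approximate the record's totals, and this specific test is an assumption, not a reproduction of the authors' analysis.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
# Synthetic 7-point importance ratings for the "price" cue (not real data);
# 433 and 448 roughly match the Canadian and Chinese sample sizes reported.
canada = np.clip(rng.normal(5.4, 1.1, 433).round(), 1, 7)
china = np.clip(rng.normal(5.3, 1.2, 448).round(), 1, 7)

# Welch's t-test: does mean importance of the price cue differ by country?
t, p = stats.ttest_ind(canada, china, equal_var=False)
print(f"Canada M={canada.mean():.2f}, China M={china.mean():.2f}, "
      f"t={t:.2f}, p={p:.3f}")
```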
Social network fatigue affecting continuance intention of social networking services: The case of WeChat users in China's universities
[ "Continuance intention", "Social networking services", "SNS users", "Social network fatigue", "Uses and gratification theory" ]
[ { "content": "Summarize the following paper into structured abstract.\n\n1. Introduction: Social networking services (SNSs) have become one of the routine channels to extend or maintain individuals' social connections, and they enable millions of users to socially interact with one another by overcoming time and geographic differences in a cyber-environment (Chai and Kim, 2012; Hsu et al., 2014). A variety of SNS providers, such as Facebook, Twitter, Instagram and WeChat, provide users with more easy-accessible approaches to communicate with each other. In recent years, the rapid development of mobile internet further promotes this trend. Many applications (App) of SNS especially designed for mobile devices are continuously emerging. As the functions and features of many SNSs have become more and more similar, competitions among those SNS providers inevitably exist. For any SNS, its durable viability and prosperity depend on users' continuous usage behavior rather than their first adoption of it. Therefore, exploring how to facilitate users' continuance intention is necessary. But on the other hand, as users are always encouraged to pay continuous attention to SNSs and sometimes have to respond an overwhelming volume of social demands timely (Lee et al., 2016), more and more energy and time are required. These users are more likely to suffer tiredness, boredom and strain. Some scholars named this phenomenon as social network fatigue.\n2. Theoretical background and hypotheses: 2.1 Uses and gratifications theory\n3. Methodology: 3.1 Measurement development\n4. Data analysis and results: We employed partial least square (PLS) SEM with SmartPLS 3.2.7 to test the model. PLS-SEM is more appropriate to deal with research data that may not meet the assumptions of multivariate normality, and thus it can be regarded as a suited statistical analysis tool when sample size is relatively small (Birkinshaw et al., 1995). Besides, compared with co-variance-based SEM, PLS-SEM is a more suitable method for exploratory and prediction-oriented studies such as the current research (Chin et al., 2003). As suggested by Anderson and Gerbing (1988), a two-step approach for data analysis was used in this research. The first stage was to analyze the measurement properties of constructs, and the second stage was to test the structural relationships among constructs.\n5. Discussion: According to the results of data analysis, it can be found that most of our hypotheses are supported. First, the results suggest that social network fatigue will not only directly weaken SNS users' continuance intention, but also negatively influence the formation process of continuance intention. For one thing, social network fatigue will directly and negatively affect continuance intention. It is consistent with some previous studies in the context of SNS (Lin, 2015; Ravindran et al., 2014; Zhang et al., 2016), which social network fatigue is an important reason for SNS users' discontinuance intention. For another, the relationship between information seeking and continuance intention and the relationship between perceived enjoyment and continuance intention can be negatively moderated by social network fatigue. As indicated in Figures 3 and 4, when SNS users experience a high level of social network fatigue (SNF at +1 standard deviation), the gratifications acquired from information seeking and perceived enjoyment can be weakened, which will then alleviate their continuance intention. 
Therefore, this study indicates the effect of social network fatigue on users' continuance intention from a more comprehensive perspective.\n6. Implications and limitations: 6.1 Theoretical implications\n", "role": "user" }, { "content": "Purpose: The purpose of this paper is to explore how social network fatigue affects continuance intention of social networking services (SNSs) from a more comprehensive perspective.\nDesign/methodology/approach: A research model was proposed by integrating uses and gratification theory (UGT), social network fatigue and continuance intention. The relationships between the three dimensions of UGT and continuance intention were examined, as well as the direct and moderating effects of social network fatigue. To collect data, an online survey was adopted, and 227 valid responses were analyzed by structural equation modeling.\nFindings: The empirical findings demonstrate that utilitarian gratification (information seeking), social gratification (social interaction) and hedonic gratification (perceived enjoyment) are positively associated with SNS users' continuance intention. Meanwhile, social network fatigue can directly and negatively affect continuance intention. Furthermore, the relationships between two dimensions of UGT (information seeking, perceived enjoyment) and continuance intention are also negatively moderated by social network fatigue.\nOriginality/value: This study contributes to the understanding of the relationship between continuance intention and social network fatigue. The authors indicate that social network fatigue will not only directly weaken SNS users' continuance intention, but also negatively influence the formation process of continuance intention. Some other theoretical and practical implications are also provided.\n", "role": "assistant" } ]
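The record above tests whether social network fatigue moderates the gratification-intention links via PLS-SEM. As a rough stand-in for that moderation test, the sketch below estimates an OLS model with a mean-centered interaction term in statsmodels; the data are simulated, and OLS is an approximation, not the authors' PLS-SEM procedure.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)
n = 227  # matches the record's sample size
df = pd.DataFrame({
    "enjoyment": rng.normal(0, 1, n),
    "fatigue": rng.normal(0, 1, n),
})
# Simulate a negative moderation: fatigue weakens the enjoyment effect.
df["intention"] = (0.5 * df.enjoyment - 0.3 * df.fatigue
                   - 0.2 * df.enjoyment * df.fatigue
                   + rng.normal(0, 1, n))

# Mean-center the predictors, then test the interaction term.
df["enj_c"] = df.enjoyment - df.enjoyment.mean()
df["fat_c"] = df.fatigue - df.fatigue.mean()
model = smf.ols("intention ~ enj_c * fat_c", data=df).fit()
print(model.params.round(3))  # negative enj_c:fat_c coefficient = moderation
```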
Enhancing the value of facilities information management (FIM) through BIM integration
[ "Construction", "Building information modelling", "Information value", "Facility information management" ]
[ { "content": "Summarize the following paper into structured abstract.\n\n1. Introduction: Building information modelling (BIM) has significant potential to bring positive impact to the sustainable delivery of construction project goals (Poirier et al., 2015). It provides a robust platform to exchange, reuse and update asset information, which is an essential requirement for a successful through-life management of a facility. BIM integrates several project dimensions such as time, cost, quality and environment into a single platform through an intelligent multi-dimensional model. It has recently become more popular within the UK construction industry. Having analysed, through life adoption of BIM within the UK construction projects, Eadie et al. (2013) reveal 55 per cent of construction organisations adopt BIM during design and construction phases, however, only 9 per cent during the facilities management (FM) phase.\n2. FIM in construction projects: Literature reveals construction as a heterogeneous and enormously complex industry. The industry produces one-off products, many of which have longer lifecycles. Supply chain integration has been identified as one of prominent considerations for effective delivery of construction products and processes. The project stakeholders have different levels of interests and influence towards the project goals. Therefore, the information flows of construction project are enormously complex, scattered and multi-dimensional.\n3. Information value: The term \"value\" is multifaceted, providing different meanings to different stakeholders. Simply it is the \"cost\" over \"benefits\", which represent the ultimate worth of the considered matter (Neal and Strauss, 2008). The costs and benefits can be communicated in different ways. Repo (1986) explained a dual approach as \"exchange value\" and \"value in use\". Exchange value refers to the market value of information when it is regarded as a product or service. On the other hand, value-in-use refers to the benefits of information to its users, which include both subjective and objective perspectives.\n4. BIM in AECO environments: BIM is identified as one of \"emerging approaches to design, construction, and facilities management, in which a digital representation of the building process is used to facilitate the information exchange and interoperability in a digital format\" (Eastman, 2011). In a construction context, it appears as an architectural, engineering, construction and operations (AECO) digital modelling process (McGraw-Hill, 2009; Succar, 2009). BIM is also introduced as a technology based process, which aims to enhance the performance of a built asset with through-life considerations (Love et al., 2013). In the context of FM, it holds more credit as a process; however, there is an ongoing debate on whether BIM is a mere \"process\" or a \"tool/technique\" (British Institute of Facilities Management, 2012). Literature explains a variety of definitions for BIM in AECO environments. Having recognised the holistic nature of construction, this paper considers BIM as \"a set of interacting policies, processes and technologies generating a methodology to manage the essential building design and project data in digital format throughout the building's life-cycle\" (Penttila, 2006). 
BIM is considered one of the mechanisms that assist project stakeholders in informed decision making and apposite coordination by providing correct information at the right time, which may lead to waste reduction (Love et al., 2013). However, limited attention has been paid to identifying the whole-life benefits of BIM in AECO environments.\n5. Research methodology: A preliminary literature review was undertaken to identify the existing knowledge gap and to formulate the research question. At the second stage, a snowballing approach was adopted to carry out interviews to study the context and main themes, which are \"value of information\", \"facilities management\" and \"application of BIM in FM functions\".\n6. Data collection and analysis: The snowball sampling method was used to select suitable participants for the interviews. This sampling method provides the opportunity to gather data from an appropriate population, under the theory that members with special relevance are familiar with each other within the population (Penrod et al., 2003).\n7. Results and discussion: The UK Government's contribution towards adoption of BIM in construction businesses has shown continuous growth over the past decade. Although it is mandatory to adopt BIM Level 2 in all publicly procured construction projects from 2016 onwards, its adoption is still concentrated in the design and construction stages (Eadie et al., 2013; Love et al., 2013; Volk et al., 2014). With this influence, BIM is still recognised as a mere technology used during the design and construction stages, and there is only limited knowledge and experience of its impact on the future of the organisation as an information tool for the operation and maintenance of built assets over their life cycle (Love et al., 2013).\n8. Conclusion: The construction sector should focus on \"product-service delivery\" rather than looking at the products and services separately. The limited awareness of BIM and its wider potential among clients and facilities managers is the main barrier to adoption of BIM within FM (Giel and Issa, 2015). The initial motivation to adopt BIM in FM highly depends on the quality of information passed through the BIM processes. Considering the fact that having information is always useful, facilities managers tend to request all possible information about the built asset. Information suppliers attribute this to information users' poor understanding of the information they actually require. Therefore, it is necessary to understand the specific uses of information in order to cooperate with a high-tech information society. If not, BIM would be misused as just another type of software that replaces the maintenance document storeroom.\n", "role": "user" }, { "content": "Purpose: Building information modelling (BIM) provides a robust platform for information management in built environment facilities. However, one of the consequences of the limited application of BIM in facilities information management (FIM) is that the potential value gain through the integration of as-built information during the operations management process has had limited exploration in current practice. The purpose of this paper is to explore the potential impact of BIM to enhance the value of FIM.\nDesign/methodology/approach: A detailed literature review was undertaken to identify BIM application in a construction context, and to develop a framework to investigate the value of information. An interpretative approach was adopted for data collection and analysis. 
A total of 14 semi-structured interviews were conducted with construction industry professionals to identify how the value of FIM can be enhanced through BIM integration. The interview data were analysed using open and selective coding.\nFindings: The findings confirm that information exchange between the construction and facilities management (FM) phases of a project is important for the efficient and effective maintenance of a facility as well as for optimising the design task. With these promising benefits, BIM is an efficient mechanism to facilitate construction information exchange. However, there is uncertainty over the optimum level of information that ought to be on a BIM model for FM purposes. The relationship between different aspects of value is a starting point to filter the required information for each individual project. In contrast, limited awareness of the value of information exchange and the potential of BIM-enabled FIM during construction is noted.\nResearch limitations/implications: The information exchange considered within this investigation was limited to two key phases of the facility life cycle, namely, construction and FM (in-use).\nPractical implications: The findings bring insight into an unseen aspect of FM information needs that should be given priority in upcoming BIM developments. They also draw attention to how value is perceived on a day-to-day basis beyond monetary terms.\nOriginality/value: The investigation of value enhancement through BIM integration, in particular for FIM, together with ongoing research into new value dimensions.\n", "role": "assistant" } ]
True, fake and alternative: a topology of news and its implications for brands
[ "Semiotics", "Brand communication", "New media", "Fake news", "Semiotic square" ]
[ { "content": "Summarize the following paper into structured abstract.\n\nIntroduction: Recently, Berthon and Pitt (2018) highlighted the symbiotic relationship between fake news and brands. Brands both finance and enable fake news by paying to support the platforms hosting these stories. They also legitimize fake news by co-presence, for example advertising adjacent to a fake story. The effect also runs the other way: on the one hand, brands can become the target of fake news, and on the other hand, brands can become contaminated by association with fake news.\nFake news and brand credibility: While the phrase \"fake news\" has become more commonplace in current discourse, and varying degrees of truth or falsities have been conveyed in brand communications, there is not a significant body of research that focuses on formally identifying the different types of information shared through brand messaging. Majority of the literature on fake news resides in journalism, media and communication studies (Balmas, 2012) as well as computing and information technology (Rubin et al., 2015, 2016). The topic is gaining steam in disciplines such as law and policymaking (Klein and Wueller, 2017).\nThe semiotic square: The \"language turn\" in philosophy began about a century and half ago (Blonsky, 1985), as interest in the relationship between the world and how it is represented flourished. For it became apparent that we have no unmediated access to the objective world and that the tools, such as language and other signs, that we use to understand the world play a role in what we see and how we interpret what we see. Semiotics, the discipline which seeks to explore how we use and interpret signs, lies at the heart of epistemology or theory of knowledge.\nBrand communications: real, fake, empty and ironic: In applying our typology of type of news to brands, we make two refinements. First, we move to brand communications rather and then brands in and of themselves. For at any one time and through time brands have multiple, heterogeneous and changing brand communications. Any one act of communication can range from real to ironic. Second, in our survey of news communications, the notion of intention - i.e. the motivation behind the act of communication, be that to accurately inform or deceptively misinform became apparent and are incorporated into the discussion of brand communications.\nTheoretical contributions and managerial implications: The concept of \"fake news\" has been discussed, mostly, in terms of its ethics and legality in past literature. Linguistically, the studies in information sciences focused on AI algorithms to detect and differentiate between fake and genuine news (Hardalov et al., 2016). We propose a structural lens to describe and identify alternative types of news one may encounter on news media in this age. This study would not only contribute to the brand management literature but with further investigation may also provide insights into the textual determinants adding to the information technology literature stream.\nConclusions: In this paper, we introduce Greimas' (1983) semiotic square and use it to unpack the fake/real news dichotomy. The resulting four type of news: real, fake, empty and ironic are then applied to brand communications. 
We conclude by stressing the dynamic nature of brand communications and by exploring further interpretive opportunities offered by the semiotic square.\n", "role": "user" }, { "content": "Purpose: Berthon and Pitt (2018) recently highlighted the symbiotic relationship between fake news and brands. This paper aims to draw on semiotics to refine the fake/real news dichotomy into a fourfold typology.\nDesign/methodology/approach: First, the authors turn to semiotics and review Greimas' (1966) semiotic square. Second, they use this framework to refine the fake/real news dichotomy into a fourfold typology. Third, they illustrate each type with a news report on the topic of climate change. Fourth, they apply this framework to reveal four types of brand: real, fake, empty and ironic.\nFindings: Given that brand communications are heterogeneous, the authors suggest that the typology can be reconceptualized as dimensions and brand communications decoded accordingly. They conclude by exploring further opportunities offered by the semiotic square for interpretive investigation.\nOriginality/value: The value of the paper lies in the novel use of the semiotic square to shed light on both news and brand communications.\n", "role": "assistant" } ]
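The record above derives a four-type typology (real, fake, empty and ironic) from Greimas' semiotic square but gives no operational rule for assigning a communication to a type. One possible encoding, treating the square's terms as combinations of factual accuracy and intent to deceive, is sketched below; this two-flag reading is an illustrative assumption of ours, not the paper's formal definition.

```python
from enum import Enum

class NewsType(Enum):
    REAL = "real"      # factually grounded, intended to inform
    FAKE = "fake"      # factually false, intended to deceive
    IRONIC = "ironic"  # factually false, but not meant to be believed
    EMPTY = "empty"    # makes no substantive factual claim either way

def classify(factual: bool, intends_to_deceive: bool,
             makes_factual_claim: bool = True) -> NewsType:
    """Illustrative two-axis reading of the semiotic square's four types."""
    if not makes_factual_claim:
        return NewsType.EMPTY
    if factual:
        return NewsType.REAL
    return NewsType.FAKE if intends_to_deceive else NewsType.IRONIC

# e.g. a satirical climate headline: false, but not meant to deceive
print(classify(factual=False, intends_to_deceive=False))  # NewsType.IRONIC
```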
Facebook users' loneliness based on different types of interpersonal relationships: Links to grandiosity and envy
[ "Loneliness", "Benign envy", "Malicious envy", "Grandiosity", "Para-social relationship" ]
[ { "content": "Summarize the following paper into structured abstract.\n\n1. Introduction: Prior research has suggested that one's number of friends and total time spent on Facebook have positive correlations with higher levels of depression (Moreno et al., 2012). It has also been suggested that a certain psychological mechanism, rather than the direct use of Facebook, might decrease psychological well-being (Moreno et al., 2012). For example, self-evaluation while using Facebook or \"Facebook surveillance\" can increase depressive symptoms (Blease, 2015; Tandoc et al., 2015). This may suggest that it is critical to understand how users' judgments about others' Facebook information and the emotions evoked by such information influence their psychological well-being. In the current research, we examined how other users' Facebook posts reflecting narcissism affect one's loneliness as a predictor of psychological well-being.\n2. Theoretical framework: 2.1 Grandiosity and loneliness\n3. Methods: 3.1 Participants and procedures\n4. Results: 4.1 Preliminary analysis\n5. Conclusion: 5.1 Discussion\n", "role": "user" }, { "content": "Purpose: The purpose of this paper is to investigate: first, the effects of a user's grandiosity on the loneliness of another user on Facebook who detected it in terms of his/her well-being status; second, the mediational role of envy between grandiosity and loneliness; and, third, whether different effects are given on narcissism, envy, and loneliness depending on social or para-social relationships on Facebook.\nDesign/methodology/approach: This study's focus is to investigate how observing others' grandiose behaviors impact on individuals' feeling of loneliness. The authors propose that this relationship is mediated by the feeling of envy. The authors further postulate that social relationships that participants may have with other Facebook users would play a key role in feeling different types of envy (i.e. malicious vs benign). Therefore, the current study employed a 2 (levels of grandiosity: high vs low) x 2 (social relationship: para-social vs social) between-subjects design.\nFindings: The authors found that one's grandiosity as reflected on Facebook significantly affects other users' loneliness through malicious envy. However, no moderated mediation via envy (either benign or malicious) was found within the social relationship group.\nOriginality/value: Social comparison generated by the use of Facebook was found to have an effect on the user's loneliness through the mediation of envy. In particular, the possibility that such effects could be triggered in para-social relationships was identified.\n", "role": "assistant" } ]
Food deserts: is it only about a limited access?
[ "Food marketing", "Distribution", "Food deserts", "Purchase factors", "Retail sale" ]
[ { "content": "Summarize the following paper into structured abstract.\n\n1. Introduction: From the mid-1990s onward, the identification of urban geographical areas with limited access to food retailers has received growing interest from the academic literature and from public health authorities (Apparicio et al., 2007; Larsen and Gilliland, 2009; Cummins and Macintyre, 1999). The term \"food deserts,\" used originally in Scotland at the beginning of the 1990s (Cummins and Macintyre, 1999, 2002), is commonly employed to refer to areas that are characterized by socio-economic deprivation, and by a poor access to food stores providing healthy and affordable food (Robitaille and Bergeron, 2013; Franco et al., 2009; Cummins, 2007; Zenk et al., 2005). The literature links food deserts to the scarcity of food retailers in specific geographical areas (Walker et al., 2011), such as \"urban zones with 10 stores or fewer and none with over 20 employees\" (Hendrickson and Eikenberry, 2006, p. 372), or even, in some cases, their complete absence (Beaulac and Kristjansson, 2009).\n2. What do we currently know about food deserts?: Robitaille and Bergeron (2013) define food deserts as sectors offering little access to food stores and located in the lowest income quintile. They make the point that the literature considers the concept of food deserts in terms of four features: the first feature is the definition of food environment components. Most studies include only supermarkets and sometimes convenience stores and fast foods. Apparicio et al. (2007), however, point out that from a purely geographical perspective, supermarkets are not the only healthy food retailers in food deserts, small specialist stores (e.g. butcher's, fishmonger's) or ethnic grocery stores can also provide healthy food (fruits and vegetables); the second feature concerns the measures of access and availability used in the studies: usually, these involve the distance between the place of residence and the food store (e.g. the lack of large supermarkets within a distance of 0.5 mile from the postal code centroids (Walker et al., 2011)) or the average distance from a given number of stores (e.g. the approximate time to walk to the nearest supermarket is 15 minutes (Babey et al., 2008)); the third feature concerns the method of identification of deprived areas: as an example, in the UK, people use indexes of deprivation from a socio-economic perspective, whereas in the USA, they use data regarding the poverty rate; and the fourth feature involves the poor access cut-off line to food stores: the line is determined to be one mile (1.6 km) in urban areas and ten miles (16.1 km) in rural areas, but those distances may differ in the literature, particularly for the urban areas where it varies from 450 m to 1.6 km.\n3. The qualitative study: 3.1 Study 1\n4. Study 2: The purpose of the study 2 was to adopt a confirmatory stance in order to examine the hypothesized relationships involving the constructs that were identified during the qualitative phase.\n5. Discussion and conclusion: This study identifies the factors influencing the purchase of healthy food products (i.e. fruits and vegetables) in food deserts, from the perspective of consumers. By so doing, we answer previous calls in the literature for a closer examination of the psychological level in the study of food deserts (e.g. Shaw, 2006, 2014). Besides, in contrast to previous studies (e.g. 
Beaulac and Kristjansson, 2009; Donald, 2013), this study is one of the first to take a consumer perspective and focus upon the residents' needs (hence, demand) in food deserts rather than solely upon issues of supply and context.\n", "role": "user" }, { "content": "Purpose: Previous research has extensively examined \"food deserts,\" where access to healthy food is limited. However, little is known about the buying behavior at the individual household level in terms of buying habits and consumption in these areas. The purpose of this paper is to determine to what extent factors other than access can account for the purchase of healthy food products, namely, fruits and vegetables.\nDesign/methodology/approach: This paper proposes to partially fill this gap through a qualitative (n=55) and quantitative (n=512) study of those people who are in charge of their household purchases in two food deserts in the city of Montreal.\nFindings: Results show that geographical access to supermarkets is not the main factor fostering the purchase of healthy foods (fruits and vegetables). Indeed, food education (e.g. information, simple recipes, cooking classes), associated with a changing mediation process through product diversification (e.g. availability of local products in bulk) and supply (e.g. farmers), seems to be more significant.\nResearch limitations/implications: Future studies could compare the results obtained through this study in different socio-demographic contexts. Longitudinal analyses could also increase the understanding of the social and commercial challenges.\nOriginality/value: In contrast to previous studies, the results show that geographical access to supermarkets is not the main factor fostering the purchase of fruits and vegetables. Indeed, food education (e.g. information, simple recipes, cooking classes), associated with a changing mediation process through product diversification (e.g. products in bulk) and supply (e.g. farmers), seems to be more significant.\n", "role": "assistant" } ]
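The record above quotes the common poor-access cut-offs of one mile (1.6 km) in urban areas and ten miles (16.1 km) in rural areas, combined with socio-economic deprivation. A small function encoding that rule is sketched below; the lowest-income-quintile test is one of the deprivation measures the record mentions, and combining the two conditions with a simple AND is an illustrative simplification.

```python
def is_food_desert(dist_to_supermarket_km: float,
                   income_quintile: int,
                   urban: bool = True) -> bool:
    """Classify an area using the cut-offs cited in the literature review:
    poor access beyond 1.6 km (urban) or 16.1 km (rural), combined with
    deprivation (here: lowest income quintile, coded 1 of 5)."""
    cutoff_km = 1.6 if urban else 16.1
    poor_access = dist_to_supermarket_km > cutoff_km
    deprived = income_quintile == 1
    return poor_access and deprived

print(is_food_desert(2.3, income_quintile=1, urban=True))    # True
print(is_food_desert(2.3, income_quintile=4, urban=True))    # False
print(is_food_desert(12.0, income_quintile=1, urban=False))  # False
```

As the record notes, the urban cut-off varies from 450 m to 1.6 km across studies, so the threshold would be a tunable parameter in any real analysis.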
Institutional theory and change: the deinstitutionalisation of sports science at Club X
[ "Case study", "Football", "Institutional work", "Deinstitutionalisation", "Institutional entrepreneurs" ]
[ { "content": "Summarize the following paper into structured abstract.\n\nIntroduction: Whilst institutional theory has become a popular explanation for individual and organisational action, it has often been critiqued for its tendency to focus on explaining how institutions endure, rather than on how they change (Baum, 2002; Dacin et al., 2002; Elsbach, 2002; Goodman et al., 1980; Oliver, 1992). One means by which researchers have sought to draw attention to sources of institutional change is in relation to the deinstitutionalisation of existing norms and practices; the processes by which institutions weaken and dematerialise (Scott, 2001, p. 182). The importance of deinstitutionalisation has been emphasised (e.g. Oliver, 1992; Maguire and Hardy, 2009; Zilber, 2002) with several such studies (following Scott, 2001, p. 184) stressing the importance of placing this phenomenon in a broader context of institutional change, as the weakening and disappearance of one set of beliefs and allied practices is likely to be associated with the arrival of new ones. Over the last ten to fifteen years, a new emphasis on understanding the role of actors in these processes has emerged within institutional studies - as witnessed in debates concerning institutional entrepreneurs as agents of institutional change (Maguire et al., 2004); stressing the purposive actions of individuals to alter existing institutions to suit their own ends. More recently, the concept of institutional work has extended our understanding of the role of agents in the creation, maintenance and destabilisation of institutions (Lawrence and Suddaby, 2006), emphasising the centrality of the situated practices of individuals and groups as they cope with and respond to the demands of their working lives.\nInstitutional entrepreneurs and deinstitutionalisation: There are a number of explanations as to why deinstitutionalisation might occur. One common explanation suggests that new practices simply displace old ones with new approaches being seen as preferable to existing arrangements (Hardy and Maguire, 2008, p. 202; Leblebici et al., 1991; Wicks, 2001); potentially reflecting a self-interested awareness that there are better alternatives so that the original practice loses its meaning (Ahmadjian and Robinson, 2001; Davis et al., 1994; Maguire and Hardy, 2009), or through the undermining of core assumptions and beliefs. Another explanation is that practices can simply be abandoned (Oliver, 1992), or that because the micro level of validation of the institution is discontinued, the macro level of the institution falls away (Zucker, 1988). Other explanations concerning deinstitutionalisation highlight coercive force (Fligstein, 1990) or emphasise the institutional work of disconnecting sanctions (Lawrence and Suddaby, 2006, p. 235), where state and non-state actors work through state apparatus to disconnect them from some sets of practices, technologies or rules (Jones, 2001; Leblebici et al., 1991). This kind of coercive work aimed at disrupting institutions involves defining and redefining sets of concepts in ways that reconstitute actors and reconfigure the relationships between them - often effecting large-scale, revolutionary change (Ahmadjian and Robinson, 2001). 
Deinstitutionalisation may also occur through contestation between alternative institutions (Clemens and Cook, 1999; D'Aunno et al., 2000; Seo and Creed, 2002), an argument that resonates with this study.\nResearch methods: Answering the research questions required an extended period of data collection that lasted over seven years. Because the study sought to understand and capture institutional and deinstitutional processes over this period, involving a process of substantial change, a case study approach was deployed. It was chosen because the essence of a case study is that it tries to illuminate a decision, or set of decisions, and the rationales for their enactment as well as their results. It has the ability to illustrate or explain the motivations that underlie the observed processes, thus providing a rich, multi-dimensional picture of the organisation being studied (Marshall and Rossman, 1999; Remenyi et al., 1998; Yin, 2003). The research also involved an extensive immersion in the organisation by the first author, with extended periods of time being spent at the organisation. The \"everyday\" thinking of the \"subjects\" of the research is especially useful when seeking to uncover how organisational actors institutionalise and deinstitutionalise practices over an extended period of time. An ethnographic approach such as the one adopted here, in alliance with the case study, provided a rare opportunity to observe the complex interaction of sports science practices and institutionalisation within a setting the actors had been substantially responsible for creating. Three main data collection strategies were used: non-participant observation, semi-structured interviews, and archival data gathering. Multiple techniques allowed us to \"triangulate\" findings from different sources of data for the purpose of understanding the organisation and the sports science institution. As stated, the study is based within a single organisational setting.\nData analysis: The primary data were used to identify the institutional work carried out to create, maintain and disrupt the institution (Lawrence and Suddaby, 2006). In order to achieve these accounts of institutional and deinstitutional work, we systematically analysed all of the interview transcripts, field notes and other relevant documents to identify first-order concepts (Van Maanen, 1979) concerning practical actions respondents engaged in that (first of all) pertained to the creation and maintenance of sports science, and then, latterly, to its deinstitutionalisation. In the second stage of analysis, we engaged in axial coding (Strauss and Corbin, 1998) to build up more abstract and robust descriptions of institutional work. Once these categories were developed, they were fleshed out with examples of quotes taken from the primary data.\nFindings: The creation and maintenance of sports science\nThe deinstitutionalisation of sports science: In May 2007, manager 1 left Club X and was swiftly employed by another club. Respondent 2, as well as some of Club X's sports scientists, joined manager 1, and respondent 3 was headhunted by another EPL team soon after. Manager 2, who had been employed as manager 1's assistant manager for 18 months prior to his departure, succeeded manager 1. 
The deinstitutionalisation of sports science was largely achieved within six months, with the deinstitutionalising work falling into four categories: asserting dominant institutional logics; reconfiguring institutional vocabularies and the contents of conversation; destabilising the affective environment; and disrupting a community of practice.\nDiscussion: The overall focus of this study was to identify how an institution, maintained and extended over time and seen to be key to sustaining performance, was deinstitutionalised; to identify the antecedents of deinstitutionalisation at Club X; and to analyse the institutional work that was carried out to create, maintain and disrupt the sports science institution. Here, the implications of the findings for institutional theory are outlined. The implications for organisational practice are explored in the closing conclusions to the paper.\nConclusions: This account illustrates how institutional entrepreneurs and agents may play a role in disrupting the very institutions they have been personally involved in creating and maintaining. It is a defining characteristic of an institution that it is impersonal and generalizable and thus portable across time and space. But the role played by performance pressures and the need for inimitability highlights that there might be reasons why institutional entrepreneurs might be disruptive to institutions with which they are associated. The three institutional entrepreneurs in this case study incorporated the institution as part of their own inimitable practice and social capital. Whilst this secured competitive advantage, the ways by which the institution was constructed and maintained - largely by customisation, secrecy, inimitability, and social complexity - meant that when they departed, the continuity of the institution was at risk if there was no effective succession. In this way the antecedents of deinstitutionalisation lie in how the institution was created and maintained. A potential contribution to organisational practice from this study therefore lies in the role effective human resources staff could play in ensuring the continuity and extension of an institution. Had the HR function been cognisant of the dual identity that manager 1 possessed, and the importance of his personal understanding of the sports science institution - as well as the institutional work involved to create and maintain it - then it is possible that a more effective leader succession plan might have been put into place that could have identified a more appropriate successor. It might also have ensured that better induction and personal development activities could have been provided for him, potentially ensuring the continuation of an institution allied to organisational performance.\n", "role": "user" }, { "content": "Purpose: This paper aims to investigate how sports science was institutionalised and rapidly deinstitutionalised within a Premier League football club. Institutional theory has been critiqued for its lack of responsiveness to change, but recent developments within institutional theory, such as the focus on deinstitutionalisation as an explanation of change, the role of institutional entrepreneurs and the increasing interest in institutional work, facilitate exploration of change within institutions.\nDesign/methodology/approach: The authors deploy a longitudinal case study that ran from 2003 to 2011. 
Data were collected via observations, semi-structured interviews and extensive literature reviews.\nFindings: Via this longitudinal case study, the authors illustrate that the antecedents of deinstitutionalisation can lie in the ways by which an institution is established. In doing so, they highlight the paradoxical role potentially played by institutional entrepreneurs in that they can (unwittingly) operate as agents of institutionalisation and deinstitutionalisation. Their study suggests that the higher the performance imperative within a field, the more likely it is that the institution as a generic concept will be deinstitutionalised, and the more likely it is to be appropriated and customised in order to gain inimitability and thus competitive advantage. Finally, the authors make an additional contribution by integrating the affective aspects of institutional work into their analyses, stressing the role played by emotions.\nResearch limitations/implications: As with many case studies, the ability to generalise from one case, however detailed, is limited. However, it provides evidence as to the paradoxical role that can be played by institutional entrepreneurs - especially in highly competitive environments.\nPractical implications: The study suggests that the HR function has a potential role to play with regard to institutional continuity through a focus on leader and institutional entrepreneur succession planning.\nOriginality/value: The paper makes an original contribution by highlighting both institutional and deinstitutional work within a single case. It highlights the paradoxical nature of institutional entrepreneurs in highly competitive environments and illustrates the importance of emotion to institutional maintenance and deinstitutionalisation.\n", "role": "assistant" } ]
Developing and evaluating a methodology for business process improvement
[ "Business process re‐engineering", "Research methods", "Assessment" ]
[ { "content": "Summarize the following paper into structured abstract.\n\nIntroduction: Business environments are complex. Almost everywhere organisations are undergoing rapid and significant changes driven by such pressures as customer expectations, new technologies, and growing global competition. As a result, many business processes within organisations are dynamic and constantly changing. In order to survive in such environments, practitioners are forced to continually revise their business processes to respond quickly to changes. There are some methodologies and tools available to help businesses improve their processes, however, none of these adequately support the practitioner through all stages in the business process improvement (BPI) activity. On this basis, this paper addresses the limitation by presenting a research that has formed and tested a generic and practical workbook-based methodology, termed model-based and integrated process improvement (MIPI) methodology for implementing process-based change and improvement.The BPI workbook methodology has been developed on the basis of existing literature and initially refined through discussion with experts in the field. The tool was then tested in two stages; first a primary application by the researcher in the public sector by means of case study. After refinement, the methodology was more widely tested through case studies, without researcher intervention, at three service sector companies. Through this process the methodology has been shown to be feasible, usable and useful.This paper is structured to firstly present the need for an effective methodology and evaluation. A background of literature in BPI is first developed, followed by detailed research aim and programme. A description of the method used to develop and evaluate the methodology is presented. This is followed by an overview of the complete BPI methodology. The final section concludes this paper and identifies areas for future research.\nBackground: This section sets the context to this work by highlighting the need for a structured methodology for BPI, and also that such methodology must be validated and refined through a systematic and rigorous evaluation.Business process improvement as a concept\nResearch aim and programme: As identified in the previous section, there is an important need to expand the capability of existing BPI methodologies to include structured and procedural aspects. The work described in this paper has been a first step in this process. Here, the aim has been to develop a practical methodology to support the implementation of business process improvement and to validate its effectiveness in organisations. Therefore, there are three objectives that have been chosen to address this, namely to:1. Develop a structured and procedural method of business process improvement.2. Seek expert opinions to identify and contrast this method against leading industry practice.3. Evaluate and refine the method through practical application.Each objective has been addressed in turn by a specific phase of research and these are described as follows.Phase 1: forming the initial structure and content of the BPI methodology\nThe complete BPI methodology: The methodology presented in this section is the final outcome of an iterative process from theory, practice, and case studies as described above. The BPI methodology has been given the acronym model-based and integrated process improvement (MIPI) methodology. 
It is a generic seven-step procedural approach that guides the actions and decisions of a process design team (see Figure 2). The MIPI methodology can be used for both process improvement and re-engineering initiatives. It addresses the \"what\" to do and \"how\" to make it happen with a participative team effort. It is a guide, not a procedure or manual. The final methodology, shown in Figure 3, has a hierarchical structure. Each step of the methodology includes: aim, actions, people involved, outcome/exit, checklists, hints and tips, and relevant tools and techniques. An outline of the content of the MIPI methodology in Table I shows the steps, key activities, and tools and techniques. A future paper will publish the workbook version of the MIPI methodology.\nAlthough the MIPI methodology has been based on other frameworks for BPI/BPR, such as Kettinger et al. (1997); Davenport and Short (1990); Childe et al. (1994); Harrington (1991), it differs in the detail. For example, the methodology produced by Smart et al. (1998) is workbook-based, but there is a clear difference in scope, as their methodology is targeted only at small and medium-sized enterprises and does not formally consider post-BPR initiatives.\nConclusions and future work: This paper aims to contribute to both theory and practice. The BPI methodology developed here addresses the gap identified in the literature by capturing relevant elements and success factors to be considered in developing and evaluating an effective methodology for a BPI activity. The main outcome of this research is the creation of a practical and procedural methodology to guide practitioners through a series of well-defined, structured steps necessary to make informed, consistent and efficient changes to business processes. It is a holistic workbook-based methodology with relevant tools and techniques, enabling the user to start the process-based change effectively at any point. The research has also contributed new knowledge on how to systematically assess a BPI methodology in practice. The process approach adopted provides a systematic, organised and participative procedure. The MIPI methodology has been applied, tested and evaluated. The methodology met the assessment criteria of feasibility, usability and usefulness. It has been shown that benefits can be gained from using the methodology and that the main objectives of the research have been successfully achieved. This research has also exposed a series of issues for future work. First, the MIPI methodology needs to be further tested with more case studies to monitor its validity and generalisability. Second, an investigation into the duration required to complete the cycle needs to be carried out. Third, a framework for process maturity needs to be created to embed process users' skills into the methodology to reflect skill and knowledge competence. This would ensure that an appropriate level of the BPI methodology is tailored to match users' skill sets. Finally, the whole MIPI methodology would benefit from being developed into an interactive web-based or CD-ROM application. This would offer better communication and self-learning, and also serve as a knowledge repository for the project team.\n", "role": "user" }, { "content": "Purpose: There appears to be an ever-insatiable demand from markets for organisations to improve their products and services. To meet this, there is a need to provide business process improvement (BPI) methodologies that are holistic, structured and procedural. 
Therefore, this paper describes research that has formed and tested a generic and practical methodology, termed model-based and integrated process improvement (MIPI), to support the implementation of BPI and to validate its effectiveness in organisations. This methodology has been created as an aid for practitioners within organisations.\nDesign/methodology/approach: The research objectives were achieved by reviewing and analysing current methodologies and selecting a few frameworks against key performance indicators; by using a refined Delphi approach and semi-structured interviews with \"experts\" in the field; and by adopting an intervention, case study and process research approach to evaluating the methodology.\nFindings: The BPI methodology was successfully formed and applied, by the researcher and directly by the companies involved, against the criteria of feasibility, usability and usefulness.\nResearch limitations/implications: The paper has demonstrated new knowledge on how to systematically assess a BPI methodology in practice.\nPractical implications: The model-based and integrated process improvement (MIPI) methodology offers the practitioner (experienced and novice) a set of step-by-step aids necessary to make informed, consistent and efficient changes to business processes.\nOriginality/value: The novelty of this research work is the creation of a holistic workbook-based methodology with relevant tools and techniques. It extends the capabilities of existing methodologies.\n", "role": "assistant" } ]